How are jobs changing?
There are two types of people: those who think artificial intelligence (AI) is mystical, and those who know how to create it. The former, the majority of the population, have turned the concept into a buzzword, and probably do not realize that AI is usually not necessary to complete the task at hand successfully. What should educators be teaching learners about AI?
It all lies in the definition. Artificial intelligence is just prediction, but where the model has been tested and tuned for greater accuracy. A simple prediction, using rough, thumb-suck parameters, can already yield a reasonably accurate result. By contrast, training the model takes significant resources, so this is only necessary in a long-term setting, where the model and task have been clearly defined and there will be cost-efficiencies from running the model repeatedly.
A predictive model brings probability into the system. The primary objective of an AI algorithm is to predict accurately (regressions, by comparison, focus more on causality than other algorithms do). During the training step, the parameter values that yield the most accurate results are chosen. Hence, using these tuned parameter values when sending the algorithm out into the wild on unseen data should give better predictive results, without repeating the heavy, process-intensive training. This is machine learning.
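As a minimal sketch of this train-tune-predict cycle, assuming Python with scikit-learn (the dataset, parameter grid and split are purely illustrative):

```python
# A sketch of the train-tune-predict cycle, using scikit-learn's
# bundled iris dataset; the parameter grid is purely illustrative.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Hold some data back, so the tuned model is judged on unseen observations.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Training and tuning: try several parameter values, keep the most accurate.
search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [1, 3, 5, 7, 9]},
    cv=5,
)
search.fit(X_train, y_train)

# Out in the wild: reuse the tuned parameters on unseen data, no retraining.
print(search.best_params_)
print(search.score(X_test, y_test))  # accuracy on data the model never saw
```

The expensive part, the parameter search, happens once; afterwards the tuned model can score unseen data cheaply.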
To put it in stylised terms, there are two ways in which one can "use AI":
One can use AI products that others create, such as ChatGPT, Gemini, Midjourney, GitHub Copilot, Gemini Code Assist, Google Search, Google Maps navigation, browsing Netflix, the YouTube home page recommendations, Facebook's News Feed, or searching on Amazon or Takealot's online stores. (The general public tends to stop counting these inventions as AI over time.) Or,
One can build software that trains models on data banks, including a set of true, correct outcomes.
It's been said that putting AI on your CV is good for your job prospects. Does the employer understand the difference between the two ways of using AI outlined above? The second requires much more aptitude: understanding AI models and developing them in-house. Let's assume that the current hype around AI is about encouraging a broad range of people to explore using it more in their day-to-day lives (the first type of use case outlined above). Intersecting this with job skills, does this mean asking whether a programmer or creative content artist leverages others' AI apps to perform their work more productively?
Typing isn't that difficult. Sure, a coding assistant may increase my productivity by 20%, but is it worth the hassle of setting it up and paying for it, when all it's doing is saving me about half an hour per day? I recognise, though, that my relative skepticism probably stems from being very comfortable with touch-typing (mapping my keyboard to Dvorak and achieving speeds of up to 70 words per minute during tests), whereas the majority of people, I think, look at their keyboard when typing, which is slow.
Automation (an entirely different concept, though often conflated with AI) is a much more important and fruitful area to focus on in order to achieve efficiency. A lot of apps that advertise that they use AI aren't really using AI, but mostly automation and algorithms. Can it be argued that AI produces better-quality work than humans? Putting aside the popular posts on social media about LLMs' quantitative failings (LLMs are designed for language, not maths), can LLMs write or create art better than human labour?
On the writing front, I argue that LLMs improve quality from bad to good, but they cannot write excellently. Excellent writing comes from subject-specific knowledge, whereas LLMs only have general knowledge. Also, LLMs are not designed to spit out a whole stream from a few keywords: the output has a lower chance of meeting your needs if you don't put in the effort to give details. LLMs are strong at generating content for social media, messages or emails, especially if a marketing agency needs to send an original, unique message with each iteration; the LLM is designed to stochastically vary the response a little each time. LLMs can also produce very relevant lorem ipsum text when presenting a mock-up of a website or poster to a potential client.
On the visual art front, I've used Gemini to create images and videos. Earlier, I experimented with an open-source model running on my computer, but since then Gemini has developed its art feature considerably.
Here are some handy ways in which you can use LLMs.
Generate a unique test of a human language that you are learning, such as isiXhosa. With Gemini Pro, click on "Canvas" mode, then activate the quiz mode, then ask the GenAI for a test on the language.
A test such as this can be generated for any academic subject, which facilitates learning.
Help with creating new teaching resources quickly, such as presentations, informational overviews, quizzes or worksheets, or ideas for lesson plans. These should be checked for quality.
Summarize an extremely long speech or unstructured document. The summary could be a paragraph to read before you dive in, or a title or teaser with good SEO. (Academic articles already have an abstract, and reports nearly always have an executive summary, so this is most useful for unstructured sources.)
Upload a file to the app, and ask questions about the file.
Write a fairly good legal contract or letter, at no cost. LLMs have good legal knowledge.
Help with brainstorming lists of examples on a particular topic.
LLMs are pretty good at using markup languages, to save you time.
Coding copilots can predict what code you want to type, making generic programming typing faster (see the sketch after this list). Programmers still need to understand how their code works, and some tasks are so detailed that I don't think a copilot could infer the necessary information from outside the programming environment, such as when unstructured data are being manually structured. A coding copilot is useful for short fragments of code, such as:
a simple chunk of code in a language that you do not know
writing a function quickly, saving the time that would be spent looking in the documentation
Generate a definition of a technical concept. A page can be generated in Gemini and easily copied in Markdown format, to paste into your company's documentation. Short definitions can also enrich metadata, in instances where a human would find creating accurate, authoritative, simple definitions too time-consuming. LLMs are usually strong with simple general knowledge, whereas they hallucinate more with specific expert knowledge.
Produce filler text, more relevant than lorem ipsum, when mocking up a website or poster.
Improve poor-quality text before sending a message, fixing grammar and making the tone more polite.
Inject randomness when re-generating a short message from the same prompt, for variety, maintaining interest in a post that is repeated.
AI art generators can create unique art to go along with a blog post. I've reflected on whether taking this up as a hobby would constitute creativity, but AI art can help to visualize unreal visions. Please use Canva instead of an LLM to create a poster, as GenAI is much more resource-intensive.
Due to hallucination, one should not rely on LLMs for accuracy if asking them to re-format text! If you think I'm missing something, feel free to comment on my LinkedIn post.
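To illustrate the copilot point above: typing a short comment is often enough to prompt a completion like the one below (a hypothetical example in Python; the function name and format string are invented for illustration):

```python
from datetime import date, datetime

# Typing just the comment below is often enough for a copilot to
# suggest the whole function, saving a look in the documentation.

# Parse a date string like "2024-03-01" into a datetime.date.
def parse_iso_date(text: str) -> date:
    return datetime.strptime(text, "%Y-%m-%d").date()

print(parse_iso_date("2024-03-01"))  # 2024-03-01
```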
As for my own use: in terms of generative AI (GenAI), I ask Gemini a few questions each day, and every now and then I get programming assistance from Gemini Code Assist, in my IDE.
In terms of the old-fashioned type of AI, I can train a predictive algorithm, tuning it so that the optimal parameter values are used. However, simply using a predictive model with sensible defaults often gives a good result, and it is not always necessary to invest time into fine-tuning the accuracy of the prediction, as the improvement may be small and there can be lots of other work to do in the project.
Some examples of predictive models that I have used include:
Cross-sectional and panel regressions
Tuning can be done by selecting significant explanatory variables.
Time series forecasting
Tuning can be done by choosing a model with a low AIC value (see the sketch after this list).
KNN classification
The number of neighbours can be tuned so that accuracy is maximized during training (as in the scikit-learn sketch earlier).
Fuzzy string matching
The maximum distance between the input string and its best match can be tuned to minimize false positives during training (see the sketch after this list). However, checking the matches is necessarily very manual (and possibly labour-intensive), so since a human reviews the output anyway, formal training is often unnecessary.
I have used these models for imputation, forecasting or matching.
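As a minimal sketch of two of these tuning steps, assuming Python with statsmodels for the time series and the standard library's difflib as a stand-in fuzzy matcher (difflib uses a similarity cutoff rather than a distance, but the tuning idea is the same; all data below are made up):

```python
import difflib

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# --- Time series: choose the ARIMA order with the lowest AIC. ---
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=200))  # a toy random-walk series

candidate_orders = [(1, 1, 0), (0, 1, 1), (1, 1, 1)]
fits = {order: ARIMA(series, order=order).fit() for order in candidate_orders}
best_order = min(fits, key=lambda order: fits[order].aic)
print(best_order, fits[best_order].aic)

# --- Fuzzy matching: tune the cutoff to limit false positives. ---
names = ["Eastern Cape", "Western Cape", "Northern Cape", "Gauteng"]
# A higher cutoff returns fewer, more reliable matches; lowering it
# admits more candidates but also more false positives.
print(difflib.get_close_matches("Easten Cape", names, n=1, cutoff=0.8))
```

In both cases the "training" is just a search over one tunable value: the ARIMA order, or the match cutoff.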
Outside of software development circles, laypeople tend to perceive AI models as having minds of their own. Sure, it can be difficult to describe how some models are structured, due to their inherent complexity (the so-called black box), but it is important to remember that software developers know how to create AI models.
We should take seriously the concern that AI development is proceeding too quickly, by understanding what that concern means. It doesn't mean that androids will wage war against humans with guns; it means that software can independently manipulate the complexity of the internet to influence real-world outcomes.