Is Artificial Intelligence useful?

"AI" is a buzzword ...

... the basic concept is prediction.

There are two types of people: those who think artificial intelligence (AI) is mystical, and those who know how to create it. The former, the majority of the population, have turned the concept into a buzzword, and probably do not realise that AI is often unnecessary for successfully completing the task at hand. What should educators be teaching learners about AI?

It all lies in the definition. Artificial intelligence is just prediction, but where the model has been tested and tuned for greater accuracy. A simple prediction, using rough "thumb-suck" parameters, can still yield a reasonably accurate result. Training the model, by comparison, takes significant resources, so it is only worthwhile in a long-term setting, where the model and task have been clearly defined and there are cost efficiencies from running the model repeatedly.

A predictive model brings probability into the system. The primary objective of an AI algorithm is accurate prediction (regressions, by contrast, focus more on causality than most other algorithms). During the training step, the parameter values that yield the most accurate results are chosen. Using these tuned parameter values when sending the algorithm out into the wild on unseen data should then give better predictive results, without the heavy, process-intensive training time. This is machine learning.
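The train-then-predict split described above can be sketched in miniature (a toy example with made-up data, not any particular algorithm): "training" searches for the parameter value that minimises error on seen data, and "inference" then reuses that value cheaply on unseen data.

```python
import random

# Toy "training data": y is roughly 3 * x plus noise.
random.seed(0)
train = [(x, 3 * x + random.uniform(-1, 1)) for x in range(1, 11)]

def mse(w, data):
    """Mean squared error of the prediction y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# "Training": choose the parameter value that minimises error on seen data.
candidates = [w / 10 for w in range(0, 61)]  # 0.0, 0.1, ..., 6.0
best_w = min(candidates, key=lambda w: mse(w, train))

# "Inference": apply the tuned parameter to unseen data -- cheap, no retraining.
print(best_w)        # close to 3.0
print(best_w * 12)   # prediction for unseen x = 12
```

The expensive part is the search over candidate values; once `best_w` is fixed, each new prediction is a single multiplication, which is why a trained model pays off when run repeatedly.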

Do you use AI?

To stylise it, there are two ways in which one can "use AI": as a consumer, leveraging AI-powered apps that others have built; or as a developer, building and training AI models yourself.

It's been said that putting AI on your CV is good for your job prospects. But does the employer understand the difference between the two ways of using AI outlined above? The second requires far more aptitude in understanding AI models and developing them in-house. Let's assume that the current hype around AI is about encouraging a broad range of people to use it more in their day-to-day lives (the first type of use case outlined above). Intersecting this with job skills, does this mean asking whether a programmer or creative content artist leverages others' AI apps to work more productively?

Typing isn't that difficult. Sure, a coding assistant may increase my productivity by 20%, but is it worth the hassle of setting it up and paying for it when all it's doing is saving me about half an hour per day? I recognise that my relative skepticism probably stems from being very comfortable with touch-typing (I map my keyboard to Dvorak and reach speeds of up to 70 words per minute in tests), whereas the majority of people, I think, look at the keyboard while typing, which is slow.

Automation (an entirely different concept, though often conflated with AI) is a far more important and effective area to focus on for achieving efficiency. Or can it be argued that AI produces better-quality work than humans? Putting aside the popular posts on social media about LLMs' quantitative failings (LLMs are designed for language, not maths), can LLMs write or create art better than human labour?

On the writing front, I argue that LLMs improve quality from bad to good, but they cannot write excellently. Excellent writing comes from subject-specific knowledge, whereas LLMs have only general knowledge. Also, do not assume that LLMs are designed to spit out a whole stream from a few keywords: the output has a lower chance of meeting your needs if you don't put in the effort to give details. LLMs are strong at generating content for social media, messages or emails, especially when a marketing agency needs each iteration of a message to be original, unique and different: the LLM is designed to stochastically vary the response a little each time. LLMs can also produce very relevant lorem ipsum text when presenting a mock-up of a website or poster to a potential client.
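That stochastic variation can be illustrated in miniature. Language models sample each next token from a probability distribution rather than always taking the single most likely one; the sketch below fakes this with a hypothetical, made-up set of next-word scores and temperature-scaled sampling (it is not an actual LLM).

```python
import math
import random

# Hypothetical next-word scores from a model, given some prompt (made up).
scores = {"original": 2.0, "unique": 1.6, "different": 1.2, "fresh": 0.5}

def sample_word(scores, temperature=1.0):
    """Sample a word in proportion to softmax(score / temperature)."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

# Each call can return a different word, so each generated message varies.
random.seed(1)
print([sample_word(scores) for _ in range(5)])
```

At a very low temperature the top-scoring word dominates; at higher temperatures the choices spread out, which is what makes each generated message slightly different.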

I do not have as much experience on the visual art front. I have generated AI art on my computer locally, using an open-source model, but I suspect that the AI art you see on social media is only the cream of the crop, produced with relatively high cost.

Use-cases of LLMs

Here are some handy ways in which you can use LLMs:

- drafting content for social media, messages and emails;
- generating relevant placeholder (lorem ipsum) text for mock-ups;
- re-formatting or restyling existing text.

Due to hallucination, though, one should not rely on LLMs for accuracy when asking them to re-format text! If you think I'm missing something, feel free to comment on my LinkedIn post.

Do I use AI in my work?

I can. I know how to train a predictive algorithm, tuning it so that the optimal parameter values are used. However, simply applying a predictive model with sensible defaults often gives a good result, and it is frequently not necessary to invest time in fine-tuning the accuracy of the prediction: the improvement may be small, and there is usually plenty of other work to do in the project.

I have used a range of predictive models, mainly for imputation, forecasting or matching.
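As a sketch of the imputation case (toy numbers and a hand-rolled ordinary-least-squares fit, not any specific project): fit a line on the complete records, then predict the missing value from it.

```python
# Complete records: (age, income); one further record has income missing.
complete = [(25, 30.0), (35, 42.0), (45, 55.0), (55, 66.0)]
missing_age = 50  # income unknown for this record

# Ordinary least squares for income = a + b * age, done by hand.
n = len(complete)
mean_x = sum(x for x, _ in complete) / n
mean_y = sum(y for _, y in complete) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in complete) / \
    sum((x - mean_x) ** 2 for x, _ in complete)
a = mean_y - b * mean_x

# Impute: predict the missing income from the fitted line.
imputed = a + b * missing_age
print(imputed)  # lies on the trend of the complete records
```

The same fit-then-predict pattern underlies forecasting (predict a future point) and matching (predict which record a new observation resembles); only the target changes.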

So, what do people mean by "AI"?

In public (outside of software-development circles), laypeople tend to perceive AI models as having minds of their own. Sure, it can be difficult to describe how some models are structured, given their inherent complexity (the so-called black box), but it is important to remember that software developers know how to create AI models.

We should appreciate the concern that AI development is proceeding too quickly by understanding what it actually means. It doesn't mean that androids will wage war against humans with guns; it does mean that software can independently manipulate the complexity of the internet to influence real-world outcomes.