Bicycles, Buses, and AI
No technology decision happens these days without someone raising the AI question (usually loudly and confidently). So let's raise it properly.
And the first step is defining what AI actually is.
Today, when someone talks about AI, they’re usually talking about a relatively recent development: Large Language Models, also known as Generative AI. This is your ChatGPT, Claude, Copilot, Mistral, etc.
But AI has been around in one form or another for decades. The term itself was coined in the 1950s as something of a marketing move: a way to attract research financing during a funding freeze.
Since then we’ve gone through several technological cycles, from neural networks to machine learning. At each step, the “Artificial Intelligence” branding gets applied, as it sets grandiose expectations of computers as simulated brains and brings science-fiction scenarios to mind. And, each time, it overpromises and underdelivers.
Even today, the AI label is applied to a multitude of technologies. The tool that lets you erase the background in Photoshop, the spam filter, the sales prediction algorithm, the self-driving car, the chatbot… They all get branded as AI, but only the chatbot is a Large Language Model.
In the book “AI Snake Oil”[1], the authors describe a scenario where all forms of transport, whether bicycles or buses, are simply called “vehicles”. Replace the word “vehicles” with “artificial intelligence”, and you get a pretty good description of the world we live in now.
The frenzy today is over LLMs/Generative AI. Everything else on that list has been around for years. Some were already called AI; others have been rebranded to profit from the current wave.
Now that we’ve cleared that up, we can move on to the usage and ethics of Generative AI (the one you’re probably thinking of when you hear “AI”).
See you tomorrow.
Colin
[1]: AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference – Arvind Narayanan and Sayash Kapoor – 2024.