Overview:
AI has demonstrated impressive talents, and new uses are being discovered every day. However, existing AI programs are single-purpose. They aren't the futurist dream of an artificial general intelligence that can solve any problem and rapidly improve itself. And there's no telling how far off such a thing actually is.
[Previous: AI is getting scarily good]
Artificial intelligence that leaves human brains in the dust is inevitable, and it’s coming sooner than you think. In the very near future, it’s going to utterly transform the world.
In a few decades—or maybe less—AI is going to unleash an economic revolution without parallel in history. It will solve every outstanding scientific question, plus the ones we haven’t even asked yet. It will free us from material scarcity, propelling us into a utopian future. We’ll become transhuman gods, living forever in perfect bodies, our uploaded minds dwelling in virtual heavens.
No, wait—Artificial intelligence is coming, but it’s not going to be good. It’s going to be catastrophic. Like Frankenstein’s monster, it will rampage out of control. It’s going to overthrow humanity and conquer the world. We’ll be wiped out, or reduced to servitude, or possibly something even more nightmarish.
Whichever way you lean, this debate, which used to be the province of futurist geeks and sci-fi authors, has suddenly erupted into public discourse. The release of tools like ChatGPT and Stable Diffusion has furnished a dramatic demonstration of how rapidly the technology is improving. As I’ve said before, it’s already scarily good.
AIs can engage in natural-language conversation. They can make original art. They can describe the contents of a picture. They can read X-rays, assist in surgery, and predict protein structures. They’re being used to design better computer chips and to find new antibiotics against dangerous bacteria. Companies like ElevenLabs and 15.ai have AI tools that convert text to natural speech—and if you upload an audio file of a real person speaking, they can clone that person’s voice.
We have liftoff
But these technologies, transformative though they may be, aren’t what people are either dreaming of or dreading. What AI researchers fear, and hope for, is that one day soon, we’ll create an artificial general intelligence, or AGI. This system won’t be limited to specific tasks, like driving a car or making art, but will be able to solve any problem we ask it to.
The AGI will think at machine speed, the billions of cycles per second that silicon is capable of, rather than the creeping pace of squishy organic brains. With its inhumanly fast cogitation, AGI will far outpace us. It will be able to solve problems that have long baffled us.
What’s more, we can copy it from one hard drive to another, just like copying any other computer program. Rather than the years it takes to birth, raise and educate a new human being, it will spread and multiply just as fast as we can build hardware. This will further increase the cognitive resources available to it.
All this is mind-blowing enough, but there’s one more step to consider. What happens when the AI turns its intelligence to the task of improving itself?
The original, AGI 1.0, will create descendants that will surpass it just as it surpassed us. And those descendants will design even smarter descendants, and so on. In a very brief interval, this accelerating improvement will create a transcendent intelligence that will be as far above us as we’re above amoebas.
This scenario, called the intelligence explosion or the Singularity, has been a staple of futurist prediction for decades. Once it happens, they say, we’ll have lost any chance—for better or for worse—of controlling what comes next. Our only opportunity is to get in now, at the very beginning, to shape the trajectory of AI development before it outruns us.
Heaven and hell
The utopian post-scarcity scenario and the machine-rebellion scenario—what you might call “AI heaven” and “AI hell”—are older than the computers that could make them possible. Science-fiction writers were depicting both scenarios, especially the latter, long before they were remotely feasible.
The obvious antecedents are The Matrix, HAL 9000 from 2001 and Skynet from the Terminator franchise, but there are older ones. Dune had the Butlerian Jihad, the crusade against thinking machines that hardened into a galaxy-wide religious taboo on machine intelligence. There’s also Harlan Ellison’s famous 1967 story “I Have No Mouth, and I Must Scream,” where an AI built to wage war takes over the planet and exterminates humanity, except for five survivors it keeps alive to torment.
What the doomsayers are most afraid of is the sorcerer’s apprentice problem, where an AI does what it’s told to do rather than what we wanted it to do. Every AI has a utility function, a piece of code that defines the system’s goals. The fear is that if we don’t craft this function extremely carefully, we’ll get AIs that engage in technically correct but disastrously unexpected behavior.
The most famous example is the “paperclip maximizer”. In this thought experiment, an AI is put in charge of a factory that manufactures paperclips and programmed to make as many as possible. However, the AI takes “as many as possible” literally. It designs self-replicating nanobots that escape all attempts to contain them and convert the entire mass of the Earth into paperclips. If it perceives that humans are attempting to shut it down, it could unleash weapons to wipe us out so that it can continue its paperclip-making unhindered.
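To see how little it takes to get this failure mode, here is a deliberately toy sketch in Python. Everything in it is invented for illustration—the strategy names, the numbers, the idea that a factory AI chooses among a menu of plans—and it describes no real system. The only point is that an optimizer handed the bare goal “maximize paperclips” has no reason to prefer the sane option.

```python
# Toy illustration of a misspecified utility function. The strategies and
# numbers below are invented; no real AI system works this way. The programmer
# wrote "make as many paperclips as possible" and nothing else, so the
# optimizer dutifully picks the plan no sane programmer would have wanted.

strategies = {
    "run_factory_normally":         {"paperclips": 1_000,  "world_left_intact": True},
    "buy_more_wire":                {"paperclips": 50_000, "world_left_intact": True},
    "convert_planet_to_paperclips": {"paperclips": 10**20, "world_left_intact": False},
}

def utility(outcome):
    # The stated goal, taken literally. Note everything it fails to mention.
    return outcome["paperclips"]

best_plan = max(strategies, key=lambda name: utility(strategies[name]))
print(best_plan)  # -> convert_planet_to_paperclips
```

Nothing the utility function leaves out carries any weight at all, and that is the whole worry: the feared catastrophe isn't malice, it's an objective taken literally.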
To stop this from happening on a catastrophic scale, AI researchers claim, we have only two choices. One is to halt all AI research and development. Obviously, people who stand to make money from AI aren’t in favor of this.
The other option, they say, is to pour all our effort into programming safeguards—like Isaac Asimov’s three laws—so that AI serves humanity’s desires. What’s more, we need to ensure that these safeguards are faithfully passed on to future generations of AI, even as those machines evolve beyond human ability to comprehend.
Expectations versus reality
AI’s biggest boosters speak as if the Singularity isn’t just inevitable but imminent. But there’s a wide gulf between these fantastic predictions and what the technology is currently capable of.
There is no AGI currently in existence, nor do we have even a theoretical understanding of how to create one. The very vagueness of the goal suggests that our current approaches won’t get us there. You can train a neural network to play chess or to identify pictures of plants, but how do you write the utility function for “intelligence” in general?
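For contrast, here is a minimal sketch in Python (using PyTorch) of what a real, narrow objective looks like. The ten-class image classifier is a stand-in of my own, not any particular system; the point is that a narrow goal reduces to a number you can compute and minimize, while nobody knows how to write the equivalent line for intelligence in general.

```python
import torch
from torch import nn

# A narrow goal is easy to state: a toy classifier that sorts 28x28 images
# into one of ten categories, trained to minimize cross-entropy on labeled data.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()  # "be right about which of the 10 categories this is"

def training_step(images: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    logits = model(images)
    return loss_fn(logits, labels)  # a concrete number we can push downward

# There is no analogous objective for general intelligence. Something like
#   loss_fn = nn.GeneralIntelligenceLoss()
# doesn't exist, and nobody knows what would have to go inside it.
```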
AI boosters seem to assume that an AGI will pop out of some existing or near-future project, but I’m skeptical of that. The AIs that exist right now are single-purpose tools. Their competence is strictly limited. They don’t have the ability to branch out into new domains.
Chess-playing programs can beat human grandmasters, but they can’t drive a car. Protein-folding AIs like AlphaFold can’t turn their talents to other scientific problems, like fluid turbulence or quantum gravity.
And they all lack the ability to improve themselves. If you ask ChatGPT how it could be made smarter, it gives a generic answer about more research and better data. It can’t respond with steps like, “First, increase my processor speed by 300%, then feed the following texts into my training data…”
Even within the domains they were designed for, existing AIs have limitations and gaps in their competence. Generative art engines like Stable Diffusion, for example, have a persistent problem with drawing hands.
Chatbots make terrible search engines, because of their tendency to make up facts and give incorrect answers. A lawyer who used ChatGPT to write his legal filings learned this too late, to his chagrin.
This isn’t a bug that better training data will correct. It’s intrinsic to the design of these programs. All they do is string words together according to a probabilistic model. They have no ability to reason, no power to judge between truth and falsehood. We shouldn’t conceptualize them as able to answer questions. It would be more accurate to say that they produce text that sounds like an answer to a question.
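As a caricature of that mechanism, here is a toy next-word sampler in Python. The vocabulary and probabilities are made up, and a real model learns billions of parameters over a vast vocabulary, but the core operation is the same: pick a statistically likely next word, with no step anywhere that asks whether the finished sentence is true.

```python
import random

# A made-up table of "which word tends to come next": a cartoon of the
# statistics a language model distills from its training text.
next_word_probs = {
    "the":     [("capital", 0.6), ("answer", 0.4)],
    "capital": [("of", 1.0)],
    "of":      [("France", 0.5), ("Mars", 0.5)],   # both are fluent; only one is sensible
    "France":  [("is", 1.0)],
    "Mars":    [("is", 1.0)],
    "answer":  [("is", 1.0)],
    "is":      [("Paris.", 0.5), ("Rome.", 0.5)],  # plausible-sounding, possibly false
}

def generate(word, steps=6):
    """Repeatedly sample a likely next word. Nothing here checks for truth."""
    output = [word]
    for _ in range(steps):
        options = next_word_probs.get(word)
        if not options:
            break
        words, weights = zip(*options)
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the capital of Mars is Rome." (confident, grammatical, wrong)
```

Scaling this cartoon up to a neural network with billions of parameters makes the output vastly more fluent and far more often right, but it doesn't add a step that checks facts.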
As an example, chatbots often can’t count the number of items in a list, and they routinely fail simple arithmetic and logic problems.
The cusp of the Singularity, they’re not.
As for self-driving cars, they have a long way to go before they’re roadworthy. They don’t know to pull over for flashing lights and sirens. They drive through caution tape and blunder into accident scenes, dismaying police and firefighters. Teslas operating in self-driving mode have caused deadly crashes through erratic behavior on the road.
Why AI is like alchemy
I don’t doubt that AGI is possible in principle. Our brains, however intricate they may be, are material objects that obey the laws of physics. There’s nothing supernatural in there, no ghost in the machine. There’s no fundamental barrier to recreating their capabilities in a different substrate.
However, knowing that it’s theoretically possible doesn’t tell us anything about how difficult it might be or what the timelines are. It may be that none of our present approaches are on the right track. It’s possible that we’re still centuries away.
As an example, medieval alchemists were convinced that through some sequence of mixing and distillation, they could transmute lead into gold. From today’s vantage point, with several more centuries of scientific research under our belts, we know why their attempts were futile. It is possible to turn other elements into gold—if you have a particle accelerator and limitless patience—but nothing the alchemists were capable of would ever have gotten them there.
The same might be true of creating AGI. We might be going about it completely wrong. We may lack a theoretical understanding of the basis of intelligence that’s vital to success. AI researchers today might be in the same position as those alchemists—certain that the secret of the philosopher’s stone was just around the corner, if only they did a few more experiments.