Our general understanding of what it means to be intelligent is unfit for purpose.
No one knows what it is or how to measure it.
How do you know how intelligent you are? Is intelligence about amassing information or knowing what to do with the information you have? Is intelligence genetic, or can you learn it?
How would you measure intelligence? Would you frame a series of questions you know you’ll smash because you have the necessary life experience to win? Is that reasonable? Is it fair? Should “fair” have anything to do with a measure of intelligence?
The most popular and enduring method of measuring intelligence in humans is called Intelligence Quotient (IQ). A high IQ score is taken to indicate intelligence.
However, IQ tests are influenced by a number of factors outside the test’s parameters, such as race and social status. Yet the suggestion that black people or poor people are inherently less intelligent is infra dignitatem. Let’s say a study showed that Africans have disability-level intelligence. Whether you see this as a problem with Africans or with IQ tests tells us something about you. (Don’t worry. That study has been dismantled by many researchers.)
IQ tests ostensibly have a high correlation with success, but only when you define “success” as income. Even then, location, inherited wealth, race, and schooling are more important factors. The tests are even less reliable for predicting academic success. They completely ignore “motivation, persistence, self-control” and many other factors.
In fact, they only measure technical aptitude in a number of well-defined, narrow areas. They ignore creativity, practical ability, and more nebulous concepts like morality or integrity. It takes a certain kind of intelligence, for instance, for someone to make a table. That intelligence might not be reflected in their vocabulary.
“An apple costs 10 cents at the local market. How much would you pay for 50 apples?”
Only a fool would say $5. For fifty apples, you could definitely bounce it down to $4.75. You know it. I know it. Everyone knows it—except for that test.
The same person can get wildly varying results from an IQ test over relatively short periods of time. If your result depends on factors outside the test, then we might not need a test at all.
I’ve taken a number of different IQ tests, both in person and online, and I’ve had results from 80 (idiot) to 160 (genius). I took an online test just now and got 135 (clever). What does that mean? In my personal life, I have never heard anyone talk about their IQ unless they were bragging. To borrow the old advice for writers, when it comes to intelligence, show, don’t tell.
In effect, the IQ test decides what intelligence is without consulting you, and then correlates high scores with positive hits on the thing it just made up. There is a very real chance that a high IQ demonstrates nothing more than an ability to do well in IQ tests.
If human intelligence is impossible to measure and might be meaningless, what can we learn from artificial intelligence (AI)?
The most popular and enduring heuristic for measuring artificial intelligence is called the Turing Test. The idea is that computers are considered “intelligent” when, through the process of normal conversation, a program can be mistaken for a human.
The central problem remains: if there is no way to reliably determine what human intelligence is, there can be no way to meaningfully simulate it. Even if there were, the Turing Test would still be a bad idea.
The Turing Test claims that we can consider computers intelligent when they successfully imitate human responses. As it turns out, this means fairly banal prompts in completely controlled environments. Moreover, conceptualizing those responses entirely through the medium of discrete uses of language seems arrogant. Evolution has optimized human intelligence for meat and neurons; that would be nonsensical for computers. It would be like Anglophones learning French through Chinese. If a computer intelligence ever emerges, we may not even recognize it.
Regardless, the tendency to describe our cognitive ability using information technology is self-defeating in modeling artificial intelligence. Generally, if you model Y using X, and the only metaphor you have to understand X is Y, then you’re really modeling Y in terms of Y. This explains nothing.
Although AI language programs are improving, they ignore how language actually works. Language is not a framework of grammar into which you slot vocabulary. Admittedly, it can seem like that if you’ve ever had to learn a foreign language.
If we don’t understand how language works, it seems unreasonable to expect anyone to be able to artificially simulate it. Until that never-going-to-happen time, there is no reason to suspect any progress in this particular area.
How I define intelligence
So, after all that, there is no way to define or accurately assess intelligence, and technology can’t help. All you can say with any certainty is that you’re better or worse than someone else at a particular thing. I can read Latin and you can’t. You can play the Moonlight Sonata and I can’t. You can make friends easily and it’s difficult for me. We can describe specific situations, but we can’t extrapolate anything from those descriptions.
If I were to attempt to define intelligence, it would be something like a messy soup of apprehension, discernment, insight, close-enough solutions, judgment, preferences, and passions. Is there any way to reliably measure any of that? And what would we do with that information?
My definition of intelligence presumably skews heavily towards my personal attributes, or how I perceive myself. It’s very tempting to imagine that “intelligent” means “people who think like me” or “people who have come to the same conclusions as me”, even if there’s no evidence to support that conclusion. I succumb to that temptation all the time. So do you.