What we're getting wrong about AI, and about Humanity
We're making two mistakes at the same time...
If you’ve not been paying attention to the news, congratulations. If you have, then you can’t help but notice all of the noise and dust swirling around the idea of AGI - artificial general intelligence, traditionally thought of as the point where an AI is as smart as (or smarter than) humans, and indistinguishable from a human in its ability to think and reason. There’s constant chatter about when we’ll achieve AGI - but also a growing cohort of people who believe we’re already there, as exemplified by the October 2023 article “Artificial General Intelligence Is Already Here.” Certainly LLMs have improved dramatically since then (though mostly with respect to cost, scalability, and the efficiency of training).
And I’ve spoken with a growing number of people who share this assessment.
There are two mistakes that lead people to this conclusion - either one, or the other, or both - and I’m here to dispel them.
The First Mistake
The first mistake is over-estimating what LLMs are really doing.
Users of these LLM-based systems mistake them for human-like intelligence, or general intelligence, because they don’t understand what LLMs do. Sometimes the people who say these things are even the CEOs of the companies that have built and produced LLM products (perhaps engaging in a bit of speculative marketing). Sam Altman has said, “We are now confident we know how to build AGI as we have traditionally understood it,” and has described OpenAI products with words that anthropomorphize what those products do, like “reasoning” and “thinking.” In a previous post I’ve explored the dangers of anthropomorphizing Artificial Intelligence:
What these people haven’t understood is that our AI techniques do not “think” in the way that humans think. These techniques are best described as automation: automating decisions; automating pattern-matching calculations; automating the production of tokens (whether those tokens are “planning” tokens or “coding” tokens or “language” tokens… they’re just tokens). When an AI plays chess, it plays for the best heuristically scored move - unless a human has tinkered with the algorithm - and does not think in the terms a human chess player might, deciding whether to play the Sicilian or the Queen’s Gambit. There’s no strategy per se, just a pattern match and an automation.
This is not to diminish how amazing these AI techniques are, how they can improve our lives, automate things that never could be done by a computer program before, or automate things that used to be hard for humans to do manually. Quite simply, these are amazing features, and they’re becoming more interesting and robust all the time. Nor is this meant to take away from the safety concerns that many people hold about the harms AI software could inflict upon us. Those potential harms are real risks we confront - because AI doesn’t have to be generally intelligent to cause harm; it only has to help humans harm each other to be truly dangerous.
And yet these systems are not, as it were, sentient.
The fact that some people are fooled by LLMs into believing that they are sentient says as much about how gullible we are as a species, as it does about how effectively LLMs can mimic human writing patterns.
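To make the “best heuristically scored move” point concrete, here’s a deliberately simplified sketch of my own - not any real engine’s code (real engines search many moves ahead). It scores each legal move with a toy material count and plays the maximum; the material_score and best_move names are invented for the example, and it assumes the python-chess package is installed.

```python
# Illustration only (assumes the python-chess package): score every legal
# move with a toy material count and play the argmax. No opening theory,
# no plan - just scoring and selection.
import chess

PIECE_VALUES = {
    chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0,
}

def material_score(board: chess.Board, color: chess.Color) -> int:
    """Material for `color` minus material for the opponent."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == color else -value
    return score

def best_move(board: chess.Board) -> chess.Move:
    """Return the legal move with the highest heuristic score."""
    mover = board.turn

    def score_after(move: chess.Move) -> int:
        board.push(move)      # try the move
        score = material_score(board, mover)
        board.pop()           # undo it
        return score

    return max(board.legal_moves, key=score_after)

board = chess.Board()
# From the start position every move scores the same, so it just picks one.
print(best_move(board))
```

The “choice” of move falls out of scoring and selection - there’s no opening repertoire or strategy anywhere in the loop.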
The Second Mistake
But there’s another mistake I see people making that can lead them to think AGI is already here - and it might surprise you.
The second mistake is under-estimating our fellow humans, and not really understanding what is going on under the hood with our companions on this planet.
An example of this mistake is called out in this article “Please call them AGIs, not LLMs” - which makes this tongue-in-cheek argument:
“I somehow feel uncomfortable about calling a system evaluated on such a large variety of diverse tasks just an LLM. It feels like that if we want to call such a general system an LLM just because it predicts words, then… We should also call humans LLMs?”
This author isn’t seriously suggesting that humans are only LLMs. But I’ve personally heard several people suggest exactly that, and I’ve read many comments in social media discussions of the topic that essentially boil down to the idea that humans are simply producing tokens like LLMs, and that humans don’t really have higher-level thinking processes.
As another example, Will Kinney writes on X:
“The thing is not that AIs are more conscious than we think they are, but that humans are actually a lot LESS conscious than they think. Probably at least 80% of what we think is free will is the application of post-hoc rationalization to things we did with no conscious thought at all.”
I don’t cite this to pick on Will - he has just stated clearly what I’ve heard from others: the idea that human cognition itself is a phantom, not real; that we rationalize after the fact that we had a strategy or a rational thought, when we actually didn’t.
Never mind that this contradicts how most of us experience our own minds working every day. To some people, that’s all other people are - an LLM. Not fellow humans on a similar plane of existence, but flat, algorithmically shaped creatures without the protagonist’s ability to think more deeply.
It reflects a deeply pessimistic view of human intelligence and agency, and one which I would reject. I’m not sure how any father or mother - raising their children - could subscribe to this point of view when describing their own children.
This pessimistic view also reflects a lack of empathy in the protagonist - an inability to feel what others feel, to experience what they experience. I can’t help but think it also reflects the state of our society, just a few years removed from covid-induced lockdowns, perhaps still interacting with our fellow humans too often via text and video chat and not often enough in person. These impersonal connections might be implicitly causing us to downgrade our estimation of our fellow humans.
We have to push back against this form of nihilism.
How do we net it out?
First, we need to accept a few things, and I’ll use an abstract medical example:
While an LLM can automate a response to diagnostic data that includes a medical diagnosis… the LLM does not, in fact, understand the human body, physiology, biology, medicine, or life. It is simply regurgitating an automated series of tokens based on what it has been trained on. That response might be accurate, but it does not reflect a fundamental understanding of these topics.
A doctor might look at the same diagnostic data and come to an incorrect conclusion, but they also have at their disposal a rich understanding of the real world and how it behaves, which allows them to test their hypothesis and treat a patient successfully.
This is not to say that in the future we couldn’t build an AI model that *does* have a useful understanding of biology or physiology or other concepts. But that is not true of today’s LLMs.
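To make “regurgitating an automated series of tokens based on what it has been trained on” concrete, here’s a toy sketch of my own - not how real LLMs are implemented (they use learned neural probabilities over subword tokens, not raw word counts) - in which a tiny bigram model continues a prompt by always emitting the most frequent next word from its training text. The medical-sounding phrases are invented for the illustration.

```python
# Toy illustration only: a bigram "model" that continues a prompt by
# emitting the most frequently observed next word from its training text.
from collections import Counter, defaultdict

# Invented "training" text for the example - not real medical guidance.
training_text = (
    "elevated troponin suggests myocardial injury . "
    "elevated troponin suggests cardiac damage . "
    "myocardial injury requires urgent evaluation ."
)

# Count which word follows which word in the training text.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def continue_text(prompt: str, n_tokens: int = 6) -> str:
    """Extend the prompt one token at a time, always choosing the most
    frequently observed next word. No medicine, no understanding - just
    counts over the training text."""
    out = prompt.split()
    for _ in range(n_tokens):
        candidates = follow_counts.get(out[-1])
        if not candidates:
            break  # never saw this word during "training"
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# Produces a plausible-sounding continuation stitched from the training text.
print(continue_text("elevated troponin"))
```

The output can sound like a diagnosis, but nothing in the program knows what troponin is - it is just counts over text it has seen before.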
The current and near-future state of LLMs holds a lot of exciting innovation - none of this is a reason to be pessimistic about what we can do with AI, one of the most interesting forms of automation we’ve ever devised.
There’s already research showing that LLMs don’t reason in the way that humans do. This should be enough to convince us that people are not LLMs, and LLMs are not Artificial General Intelligence (AGI).
I hope we can all give our fellow humans a bit more credit for what is going on between their ears, while also appreciating the magic of AI for what it is - amazing software, and not intelligence, per se.
It shouldn’t be hard to agree: LLMs are not people, and people are not LLMs.