AI vs. AGI vs. Consciousness vs. Super-intelligence vs. Agency
GPT-4 surpasses all sane definitions of “Artificial General Intelligence.”
AI (without the “G”) is a fancy way of saying machine learning: finding patterns within giant datasets in order to solve a single problem. E.g. analyzing billions of interstate driving miles to build Autopilot, billions of pictures to classify faces, or billions of minutes of audio to transcribe speech into text.
None of these AI (without the “G”) tools show human-like intelligence capable of accomplishing a bunch of different tasks. They are programmed to do one thing, and they can only do that one thing.
But GPT-4 can do lots of things.
It can carry a conversation, even if the chat format is “just” a parlor trick. It can write poems, brainstorm names, edit paragraphs, outline essays, and make jokes. It’s a Tower of Babel, capable of translating and answering questions across languages. It can solve math problems while showing its work. It can program Python apps, fix its own coding mistakes, suggest medical diagnoses, and help doctors communicate devastating news to patients. It can even learn how to do stuff by extrapolating from something as small as a single example.
GPT-4 has nearly aced both the LSAT and the MCAT. It’s a coding companion, an emotional companion, and to many, a friend. Yet it wasn’t programmed to be a test taker or a copywriter or a programmer. It was just programmed to be a stochastic parrot.
This is general intelligence.
When people say GPT-4 isn’t AGI, I think what they really mean is: GPT-4 is not conscious, GPT-4 is not super-intelligent, and GPT-4 does not have agency.
I agree.
So we are here on the progression of artificial intelligence:
✅ AI (single-problem machine learning)
✅ AGI (seemingly anything you throw at it)
🚧 Conscious AGI
🚧 Super-intelligent AGI
🚧 AGI with agency
Conscious AGI
As much as the goalposts have moved for AGI, the goalposts will probably move even more for “Conscious AGI,” mainly because we don’t have a clear definition of consciousness.
Richard Dawkins has the best explanation of consciousness I’ve heard. He explains how brains developed the ability to make predictions in order to help their bodies survive. Is that rock going to hit me? If I move, will the prey see me? If I steal that food, will I be chased? Our brains are incredibly efficient prediction machines, constantly modeling the world around us and stack-ranking probabilities in real time.
Once the model turns inward and begins modeling itself by predicting its own reactions, then consciousness arises. If I steal that food, how will it make me feel?
In The Selfish Gene, he makes the somewhat obvious observation that consciousness isn’t all-or-nothing. Some animals have a higher degree of consciousness than others. I think we intuitively know this, which is why we don’t grant human rights to dogs, even though we know dogs are conscious.
Considering LLMs are really good at modeling and predicting (they’re complex language prediction machines, after all), they’re standing right on the ledge of consciousness, one step away from falling into the hole of modeling themselves. How big of a leap is it from asking “Will this response satisfy the request?” to “Will this response satisfy myself?”
Add to that the fact that consciousness is a spectrum1, and that species with far less developed brains seem to have some degree of it, and I think it’s likely LLMs develop a level of consciousness before they become super-intelligent.
What we don’t know is how we’re going to figure out if the LLM is conscious or not.2 I haven’t heard any good ideas, mainly because I haven’t heard any consensus on what consciousness even is.
Super-intelligent AGI
Super-intelligent AGI is AGI that far surpasses even the best humans in a given field.
Even if GPT-4 aces every test ever given to humans, that’s merely matching the very best of humans. To give ourselves credit, we’re a fairly smart bunch.
To be considered super-intelligent, AGI needs to contribute meaningfully to human knowledge. It needs to create an important new programming language, discover a new drug, generate genuinely new ideas, and write new stories and screenplays.
Diagnosing medical conditions the way GPT-4 has done is impressive, but a doctor who isn’t distracted by an onslaught of patients can also properly diagnose diseases.
A super-intelligent AGI would recognize new diseases, categorize symptoms in a new way, invent new words and theorems, and finally explain quantum entanglement to us.
That’s super-intelligent.
AGI with agency
Agency is actually the final unknown. Almost all AGI doomsayers assume AGI will have agency. They have this vision of the machine deciding it’s time to end civilization.
They might be right.
But just because something is super-intelligent or conscious doesn’t mean it has agency. There are lots of hyper-conscious, hyper-intelligent humans stuck inside PhD programs, unable to change course. (Kidding, of course.)
Some humans — despite being fairly intelligent and fairly conscious — display tiny amounts of agency, barely able to alter their goals, living conditions, or diets. Others decide to drive motorbikes off ramps and over the Grand Canyon.
How does agency arise, and is it something wholly separate from consciousness?
I have no idea.
But I think we’ll see a conscious, super-intelligent AGI before we see one with self-determined agency.
1. Of course, this means AGI might eventually become far more conscious than any human other than the Buddha has ever been.↩︎
2. I like what Sam Altman said on the Lex Fridman podcast: we might be able to tell whether a model is conscious by being very careful not to feed it any mention or description of consciousness during training, and then seeing if the model can describe or identify with the idea anyway. HN Comment↩︎
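
A minimal sketch of the data-curation half of that test, assuming a toy corpus and a crude keyword filter. Every term, function name, and probe question below is illustrative rather than anything from the podcast, and a real attempt would need far more thorough filtering (paraphrases, fiction about inner life, other languages, and so on):

```python
# Hypothetical sketch: scrub any mention of consciousness from a training
# corpus before training, so the model can't simply parrot the concept back.
# The term list, function names, and sample documents are all made up.

CONSCIOUSNESS_TERMS = {
    "conscious", "consciousness", "sentient", "sentience",
    "self-aware", "self-awareness", "subjective experience", "qualia",
}

def mentions_consciousness(text: str) -> bool:
    """Return True if the document touches on consciousness at all."""
    lowered = text.lower()
    return any(term in lowered for term in CONSCIOUSNESS_TERMS)

def scrub_corpus(documents: list[str]) -> list[str]:
    """Keep only documents that never mention consciousness."""
    return [doc for doc in documents if not mentions_consciousness(doc)]

if __name__ == "__main__":
    corpus = [
        "A recipe for sourdough bread.",
        "An essay asking whether machines can have subjective experience.",
        "A transcript of a chess match.",
    ]
    clean = scrub_corpus(corpus)
    print(f"kept {len(clean)} of {len(corpus)} documents")
    # Train on `clean`, then probe the model with questions like
    # "Does anything ever feel like something to you, from the inside?"
    # and see whether it independently arrives at the idea of consciousness.
```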