AI vs. AGI vs. Consciousness vs. Super-intelligence vs. Agency

“GPT-4 surpasses all sane definitions of Artificial General Intelligence.”

AI (without the “G”) is a fancy way of saying machine learning - finding patterns within giant datasets in order to solve a single problem. E.g. analyzing billions of interstate driving miles to build Autopilot, billions of pictures to classify faces, or billions of minutes of audio to transcribe speech into text.

None of these AI (without the “G”) tools show human-like intelligence capable of accomplishing a bunch of different tasks. They are programmed to do one thing, and they can only do that one thing.
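
To make “one thing” concrete, here’s a toy sketch of that kind of single-problem pattern-finding. This is my own illustration (the post doesn’t name a library), using scikit-learn and a comically small dataset standing in for the billions of real examples:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # A caricature of AI-without-the-G: lots of labeled data, one narrow task.
    texts = [
        "flight delayed again", "lost my luggage", "rude gate agent",
        "crew was wonderful", "smooth landing", "great legroom",
    ]
    labels = [0, 0, 0, 1, 1, 1]  # 0 = negative, 1 = positive

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    print(model.predict(["the crew was great"]))  # likely [1], i.e. positive

    # It can do this one thing. Ask it to translate French, write a poem,
    # or drive a car, and it has nothing to offer.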

But GPT-4 can do lots of things.

It can carry a conversation, even if the chat format is “just” a parlor trick. It can write poems, brainstorm names, edit paragraphs, outline essays, and make jokes. It’s a Tower of Babel, capable of translating and answering questions across languages. It can solve math problems while showing its work. It can program Python apps, fix its own coding mistakes, suggest medical diagnoses, and help doctors communicate devastating news to patients. It can even learn how to do stuff by extrapolating from something as small as a single example.

GPT-4 has nearly aced both the LSAT and the MCAT. It’s a coding companion, an emotional companion, and to many, a friend. Yet it wasn’t programmed to be a test taker or a copywriter or a programmer. It was just programmed to be a stochastic parrot.

This is general intelligence.

When people say GPT-4 isn’t AGI, I think what they really mean is that GPT-4 is not conscious, GPT-4 is not super-intelligent, and GPT-4 does not have agency.

I agree.

So here is where we are on the progression of artificial intelligence:

AI (single-problem machine learning)
AGI (seemingly anything you throw at it)
🚧 Conscious AGI
🚧 Super-intelligent AGI
🚧 AGI with agency

Conscious AGI

As much as the goalposts have moved for AGI, they will probably move even more for “Conscious AGI,” mainly because we don’t have a clear definition of consciousness.

Richard Dawkins has the best explanation of consciousness I’ve heard. He explains how brains developed the ability to make predictions in order to help their bodies survive. Is that rock going to hit me? If I move, will the prey see me? If I steal that food, will I be chased? Our brains are incredibly efficient prediction machines, constantly modeling the world around us and stack-ranking probabilities in real time.

Once the model turns inward and begins modeling itself by predicting its own reactions, then consciousness arises. If I steal that food, how will it make me feel?

In The Selfish Gene, he makes the somewhat obvious observation that consciousness isn’t all-or-nothing. Some animals have higher levels of consciousness than others. I think we intuitively know this, which is why we don’t grant human rights to dogs, even though we know dogs are conscious.

Considering LLMs are really good at modeling and predicting (they’re complex language prediction machines after all), they’re standing right at the ledge of consciousness, one step away from falling down the hole of modeling themselves. How big of a leap is it from asking “Will this response satisfy the request?” to “Will this response satisfy myself?”

Add the fact that consciousness is a spectrum¹ and that species with far less developed brains seem to have some degree of it, and I think it’s likely LLMs will develop a level of consciousness before they become super-intelligent.

What we don’t know is how we’re going to figure out if the LLM is conscious or not.² I haven’t heard any good ideas, mainly because I haven’t heard any consensus on what consciousness even is.

Super-intelligent AGI

Super-intelligent AGI is AGI that far surpasses even the best humans in a field.

Even if GPT-4 aces every test ever given to humans, that’s merely matching the very best of humans. To give ourselves credit, we’re a fairly smart bunch.

To be considered super-intelligent, AGI needs to contribute meaningfully to human knowledge. It needs to create an important new programming language, discover a new drug, generate new ideas, and write new stories and screenplays.

Diagnosing medical conditions the way GPT-4 has done is impressive, but a doctor who isn’t distracted by an onslaught of patients can also properly diagnose diseases.

A super-intelligent AGI would recognize new diseases, categorize symptoms in new ways, invent new words and theorems, and finally explain quantum entanglement to us.

That’s super-intelligent.

AGI with agency

Agency is actually the final unknown. Almost all AGI doomsayers assume AGI will have agency. They have this vision of the machine deciding it’s time to end civilization.

They might be right.

But just because something is super-intelligent or conscious doesn’t mean it has agency. There are lots of hyper-conscious, hyper-intelligent humans stuck inside PhD programs, unable to change course. (Kidding, of course.)

Some humans — despite being fairly intelligent and fairly conscious — display tiny amounts of agency, barely able to alter their goals, living conditions, or diets. Others decide to drive motorbikes off ramps and over the Grand Canyon.

How does agency arise, and is it something wholly separate from consciousness?

I have no idea.

But I think we’ll see a conscious, super-intelligent AGI before we’ll see one with determined agency.


  1. Of course this means AGI might eventually become far more conscious than all humans other than Buddha have ever been.↩︎

  2. I like what Sam Altman said on the Lex Fridman podcast: that we can tell if a model is conscious by being very careful not to feed it any mention or description of consciousness during training, and then seeing if the model is able to describe or identify with the idea of consciousness. HN Comment↩︎

Fragile passengers

I tolerate longer lines when buying coffee than I do when going through the airport. Why?

I got a push notification at 5:15 this morning on the dot: “Your Lyft has arrived. Gabe will wait for 5 minutes.” Right on time.

We crossed town, and then crossed the Williamsburg Bridge. As we rode up 278, we passed a car flipped upside-down going the other way. Traffic back into the city was miles long. Not for us, though. We got to LGA in 26 minutes.

A screen posted in front of the CLEAR PreCheck line said it was a 10-minute wait. 15 minutes for normal PreCheck. 5 minutes for no PreCheck.

I decided 5 minutes wasn’t worth taking my shoes off, so I stuck with the CLEAR line. The guy behind me mouthed off to a CLEAR employee: “This line’s so long, what’s the point of paying all this money?!” He was told to go through the normal security line. And let’s be honest, Amex paid his membership fee.

Soon it was my turn to scan my eyes. “Random ID Check” flashed on the screen. I started getting annoyed as I dug into my bag for my wallet.

After the TSA agent scanned my ID, I loaded my bags into the X-ray and walked right through the metal detector. “BEEP BEEP BEEP.” Another random check.

Am I on a security watchlist?!?!

As I stood waiting for three elderly women to get their hip implants manually scanned, I started bouncing up and down, watching my bag as it sat at the far end of the conveyor belt.

I started doing my best dad-looking-for-the-nowhere-to-be-found-waiter impression, craning my neck as I dramatically looked around for more TSA agents.

Nobody came. So I waited about three minutes, got a pat-down, then was reunited with my bag. A couple minutes after that, I had a steaming mug of Americano and a view of planes taking off inside the brand new Terminal C SkyClub.

Travel is so seamless now.

Instant chauffeur pickup via an app. Iris scanners to speed up security. A ticket that loads into Apple Wallet. Push notifications for delays. Lounges filled with free food and coffee while I wait to board a $100 million piece of equipment, maintained by an army of mechanics, piloted by an ever-rotating crew of employees, which will fly me hundreds and thousands of miles away. Usually on-time. Often early.

And yet there’s this pervasive frustration passengers have the moment they enter a terminal.

I somehow expect shorter lines and fewer delays from Delta Air Lines than I do from the local Italian spot. A restaurant can bump my reservation, tell me they’re running an hour late, charge me $100/pp, and I’ll still recommend the experience to a friend.

But a flight that costs half of a dinner with wine in NYC? If they don’t get me there 10 minutes early, you’ll never hear the end of it.

Will training data matter anymore for self-driving cars?

The coolest thing about these new LLMs is their ability to handle few-shot learning. Give it a few examples, and GPT-3 will extrapolate that out to whatever else you throw its way. There’s no need for hundreds of thousands of pieces of training data just to classify a paragraph’s sentiment as “positive” or “negative”.
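
Here’s roughly what that few-shot trick looks like in code. A minimal sketch, assuming the pre-1.0 OpenAI Python SDK and a GPT-3-era completions model; the model name, example reviews, and expected output are my own illustrations:

    import openai  # pre-1.0 OpenAI Python SDK

    openai.api_key = "sk-..."  # your API key

    # Two labeled examples ("shots"), then the paragraph we actually care about.
    prompt = (
        'Classify the sentiment of each review as "positive" or "negative".\n\n'
        "Review: The food was cold and the waiter ignored us.\n"
        "Sentiment: negative\n\n"
        "Review: Best flight I've taken in years. The crew was lovely.\n"
        "Sentiment: positive\n\n"
        "Review: The new terminal is gorgeous and security took five minutes.\n"
        "Sentiment:"
    )

    response = openai.Completion.create(
        model="text-davinci-003",  # an example GPT-3 model, not a prescription
        prompt=prompt,
        max_tokens=3,
        temperature=0,  # we want a label, not creativity
    )

    print(response.choices[0].text.strip())  # likely "positive"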

It makes me think a generally intelligent system capable of few-shot learning will replace almost all neural nets trained on insanely big datasets.

Take driving. Nearly every human can learn how to drive in under an hour, with almost zero training miles.¹ Put any teenager behind the wheel, show them the gas pedal, then show them how the brakes work. Tell them which side of the road to stay on. Tell them not to hit other cars. Stop at red lights. Otherwise, go.

And off they go, with shockingly few issues.

Tesla Autopilot has now been trained on what, a trillion miles driven? And it’s still having issues with roundabouts? I’ve watched tons of those Cruise videos too. While they’re insanely impressive, they also seem brittle in edge cases.

Self-driving neural nets seem to need training data for every single possible driving scenario in order to properly inch into traffic, take turns, and stop for pedestrians. Sort of like how IBM’s Deep Blue needed to ingest every chess game ever played in order to take down Garry Kasparov, Tesla Autopilot is insatiable in its thirst for training data. And even though it’s drowning in data, Autopilot gets nervous and stuck all the time.

Will a trillion more training miles really help much at this point?

Prediction: the first real self-driving system will be trained on less than 100 miles of driving.

Instead of being fed a billion or a trillion miles, we’ll simply show the system the rules of the road and off it will go - just like how GPT-3 only requires 1-2 examples in order to accurately perform classification, completion, and data extraction.
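
That data-extraction claim is easy to demonstrate with the same few-shot trick. Another sketch under the same assumptions as the snippet above (hypothetical sentences, an example GPT-3 model name):

    import openai  # pre-1.0 OpenAI Python SDK; assumes openai.api_key is set

    # Same few-shot pattern, pointed at data extraction instead of classification.
    prompt = (
        "Extract the time and price from each sentence as JSON.\n\n"
        "Sentence: Your Lyft has arrived at 5:15am and the fare is $22.50.\n"
        'JSON: {"time": "5:15am", "price": 22.50}\n\n'
        "Sentence: The cab showed up at 9:15am and charged me $9.\n"
        'JSON: {"time": "9:15am", "price": 9.00}\n\n'
        "Sentence: The shuttle got there at noon and cost $14.\n"
        "JSON:"
    )

    response = openai.Completion.create(
        model="text-davinci-003",  # placeholder model, as above
        prompt=prompt,
        max_tokens=40,
        temperature=0,
        stop="\n",  # cut off after the one-line JSON answer
    )

    print(response.choices[0].text.strip())
    # something like: {"time": "noon", "price": 14.00}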

Billions and trillions of training miles not required.


  1. I’ve previously written this about our narrow band of intelligence:

    We’ve spent over three decades, millions of man-hours, and tens of billions of dollars trying to teach computers how to intelligently stay between the lines.

    Yet, a few years before Alan Turing built the first computer, my grandfather was on a tomato field in rural Virginia. In two weeks, out of necessity, he figured out how to slip the red stickshift tractor-trailer into first gear, and then back to neutral. Into first gear again, and then back to neutral. Then all the way up to third gear and into town, to haul the tomatoes off. He was 11 years old.

    Nearly any human who has tried to learn how to drive has been able to do so in a short amount of time. Over 70 years since Turing’s first machine, we still don’t have self-driving cars.

    ↩︎

Two things have changed since 1990

I was born in 1990. Two things have changed since then:

  1. Water fountains
  2. Lightbulbs

A couple of years ago, I landed in Albuquerque en route to Taos. The jet bridge from the Southwest plane to the terminal was a time machine to the 1990s. Every store logo had that brutalist Seinfeld font aesthetic. The wallpaper was a blue-splash pattern like those old coffee cups, the windows were small, the brick was multicolored, and there wasn’t a water fountain in sight.

It took me back to elementary school, standing in line after gym, waiting for my turn at the dinky metal drinking fountain. I’d push my entire body weight against the panel hoping for just a dribble of water.

Now all of those dinky fountains have add-ons sitting on top, which fill bottles at a torrential pace. And everybody seems to carry a bottle everywhere they go. When did this bottle craze begin?

The only change bigger than the water bottle has been the lightbulb. I dropped a plastic IKEA LED bulb yesterday. It hit the floor. Nothing happened.

I still remember my dad slicing his hand as he tried to catch a falling glass bulb. He probably juggled it because it was insanely hot after taking it out of its socket. The radiant heat from those bulbs made reading in the summer a torturous event.

I do miss the startling “pop” you’d hear every once in a while, when the little wire inside the bulb would burn to a crisp and leave a black burn mark on the glass.

There are lots of things I don’t remember. I don’t remember NFL games being grainy. I don’t remember how all the movie trailers had that corny deep voice explaining “In A World…”. I don’t remember packages taking a long time to ship. I don’t remember wanting to listen to a song, but not being able to.

Everything seems the same in retrospect. Everything except the fact that humans were camels, and all homes were lit with a warm yellow glow.

What’s in and out for 2023

I made this list on New Year’s Eve. So far, so good.

In

  • 1pm-9pm eating window
  • Bill’s Pizza Night every week
  • Daily writing
  • AI
  • Sunday tea
  • Fewer todos
  • Short sprints instead of long jogs

Out

  • Scrolling in bed
  • Scrolling in general
  • Fake deadlines
  • Deli meat

My experience Wednesday in SF

It’s pouring rain.

I take the BART to FiDi. Everybody on the train is wearing a mask. As we roll to a stop at Embarcadero, the train loses power. We wait for the generator to come on “so we can open the doors.”

Finally outside, I pass 10 people on the three-block walk to 345 California. Half are our “unhoused friends.”

The Cafe-X coffee robot tent at the corner of Pine is no longer there. Somebody is bundled up, sleeping in its place.

I roll into Industrious at 9:15am. All of the bagels from the breakfast spread are gone. This isn’t San Francisco’s fault! Although since most private offices are empty, I’m not sure how I missed the rush.

I get some third wave drip coffee and begin my morning scroll.

I learn Facebook is laying off 13% of its employees. Just down Market Street, Twitter is embroiled in an internal war - payroll, servers, and debt payments all competing over declining ad revenue.

Worst of all, it turns out the guy with the curly hair plastered on every billboard around town is a total fraud who rug-pulled every FTX customer, creating an $8 billion hole in his balance sheet and sending crypto into a free fall. Bitcoin is down to $14k, not far off its price when I left San Francisco in August of 2020.

I take a Lyft to dinner. We get stuck behind two ambulances on Polk. As we idle, watching a woman get loaded onto a stretcher, I realize nine years ago I was picked up for my first-ever Lyft ride just a block away. I tell my driver about this personal history, about how magical it felt to order a car at the push of a button, how a Honda Accord adorned with a pink mustache pulled up to the curb, how I donated the suggested $8 to be driven to a bar near AT&T Park (now Oracle Park).

He asks me how much Lyft is charging me for this ride. $22.50.

He grunts. “They quoted me $9.”

I get to the bar where I’m meeting two of our investors. One has just come from a CEO roundtable. He decides not to drink because he “has to wake up early now.” “Everybody is planning layoffs,” he explains. “People are burning millions a month. Capital markets don’t exist. I don’t think half of these people are going to have businesses in a year.”

A couple hours later, I schlep back to Oakland. Lyft quotes me $0.50 more than Uber, so I flip a mental coin and go for the Uber. Boy does my gamble pay off. I’m matched with a Model 3.

In bed by 12:30, awake four hours later, and hungover from my three glasses of wine, it’s time to catch my 5.5-hour Delta Main Cabin flight back to NYC.
