The coolest thing about these new LLMs is their ability to handle few-shot learning. Give GPT-3 a few examples, and it will extrapolate to whatever else you throw its way. There’s no need for hundreds of thousands of labeled training examples just to classify a paragraph’s sentiment as “positive” or “negative”.
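Concretely, the few-shot setup is just a prompt that contains a couple of labeled demonstrations followed by the new input. A minimal sketch of that pattern — the reviews, labels, and the `build_few_shot_prompt` helper are all made up for illustration, not any particular API:

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few labeled demonstrations plus a new input into one prompt.

    The model is expected to continue the text after the final "Sentiment:",
    imitating the pattern set by the demonstrations.
    """
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)


# Two demonstrations stand in for the hundreds of thousands of labeled
# examples a conventionally trained classifier would need.
examples = [
    ("The food was wonderful and the staff were kind.", "positive"),
    ("Cold fries, rude service, never again.", "negative"),
]

prompt = build_few_shot_prompt(examples, "Best pizza I've had in years.")
print(prompt)
```

The whole “training set” travels inside the prompt itself; swapping the demonstrations swaps the task, with no retraining step at all.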
It makes me think a generally-intelligent system capable of few-shot learning will replace almost all neural nets trained on insanely big data sets.
Take driving. All humans in the world can learn how to drive in under an hour with almost 0 training miles.1 Put any teenager behind the wheel, show them the gas pedal and then how the brakes work. Tell them which side of a road to stay on. Tell them not to hit other cars. Stop at red lights. Otherwise, go.
And off they go, with shockingly few issues.
Tesla Autopilot has now been trained on what, a trillion miles driven? And they’re still having issues with roundabouts? I’ve watched tons of those Cruise videos too. While they’re insanely impressive, they also seem to suffer from brittle edge cases.
Self-driving neural nets seem to need training data for every single possible driving scenario in order to properly inch into traffic, take turns, and stop for pedestrians. Sort of like how IBM’s Deep Blue needed to ingest every chess game ever played in order to take down Garry Kasparov, Tesla Autopilot is insatiable in its thirst for training data. And even though it’s drowning in data, Autopilot gets nervous and stuck all the time.
Will a trillion more training miles really help much at this point?
Prediction: the first real self-driving system will be trained on less than 100 miles of driving.
Instead of being fed a billion or a trillion miles, we’ll simply show the system the rules of the road and off it will go - just like GPT-3, which needs only one or two examples to accurately perform classification, completion, and data extraction.
Billions and trillions of miles trained not required.
I’ve previously written this about our narrow band of intelligence:
We’ve spent over three decades, millions of man-hours, and tens of billions of dollars trying to teach computers how to intelligently stay between the lines.
Yet, a few years before Alan Turing built the first computer, my grandfather was on a tomato field in rural Virginia. In two weeks, out of necessity, he figured out how to slip the red stickshift tractor-trailer into first gear, and then back to neutral. Into first gear again, and then back to neutral. Then all the way up to third gear and into town, to haul the tomatoes off. He was 11 years old.
Nearly any human who has tried to learn how to drive has been able to do so in a short amount of time. Over 70 years since Turing’s first machine, we still don’t have self-driving cars.