Code, nerd culture and humor from Greg Knauss.

The thing I don’t get — OK, among the things I don’t get — is how anybody who is not filling out the back half of a pitch deck thinks that simply scaling up LLMs (large language models) is going to suddenly produce AGI (another goddamned imposter). There is simply no path from one to the other, unless you hand-wave hard enough to actually go airborne.

The options:

  • You believe that human consciousness is merely a complicated statistical model but don’t have an explanation for how it comes about.

Or:

  • You believe that there is a brain-deadening panic among the world’s monied tech and business leaders — each of whom is flat-out terrified of missing the Next Big Thing, now that their NFT bets haven’t paid off — and they will throw literally billions of dollars at anybody who will tell them what they need to hear.

Hm.

I know it’s been said a thousand times before, but what we have now is not AGI. There’s no reasoning in reasoning models, and there’s no intelligence in artificial intelligence. The software doesn’t understand anything it’s producing, but just burps up the next word that’s likely to be present in its training corpus given the context of all the previous words. Again and again and again.
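And that “burps up the next word” bit isn’t just a metaphor. Here’s a toy sketch of the basic move — a word-level bigram counter, which is nothing like a real transformer and all my own invention, but it’s the same statistical party trick at its smallest:

```python
from collections import Counter, defaultdict

# Toy "training corpus." Count which word follows which.
corpus = "the cat sat on the mat and the cat sat on the hat".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def burp_next(word):
    # No understanding, no reasoning: just the statistically
    # likeliest successor seen in the corpus.
    return following[word].most_common(1)[0][0]

word, out = "the", ["the"]
for _ in range(5):
    word = burp_next(word)
    out.append(word)

print(" ".join(out))  # → "the cat sat on the cat"
```

Scale that up by a few hundred billion parameters and you get fluent text. You do not, by any mechanism anyone has articulated, get a mind.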

I guess it’s kind of depressing that that can pass the Turing Test, but then so can Kevin Roose. You have to be willing to believe that the human you’re talking to is an obsequious, credulous suck-up. Which, yes, OK, isn’t a totally invalid assumption.

And, heck, plenty of people can’t reason either, and just repeat the same series of words that have provided them with money or sex in the past. It’s why I’m in therapy.

But presumably those same people don’t cost potentially trillions of dollars, at least until Elon Musk successfully directs the entire budget of the United States government at his own businesses and — the day after his net worth passes a thousand-billion — quote-tweets some rando’s fever dream with the word “True.”

The inherent limitations of statistical models like LLMs put an air-tight cap on how far they can improve. They can’t even theoretically evolve past stochastic parrot-hood to reach the lesser apes, like Joe Rogan and Andrew Tate.

So if your business requires that you continuously leap-frog last quarter’s big reveal, how do you get past the inconvenient fact that iteration is boring and most of the use-cases that people have imagined for the current generation of AI don’t actually, y’know, use-case all that well? How do you achieve even something as pedestrian as common sense?

Easy! You over-promise! Ta-da!

Just another few hundred billion — just another half-decade of money chasing a Next Big Thing that smells an awful lot like the past half-dozen Next Big Things — and we’ll be there! You can back the dump truck into Bay 6.

The troublesome fact that there’s not even a theoretical route from the technology we have to the technology we want isn’t really an obstacle in the over-heated, over-wrought, over-excited environment we live in. Stapling “AI” onto your business plan gets you a ticket to the same dance that stapling “.com” onto it did three decades ago. My perpetual motion machine is only three or four years away, if I can get the funding.

Maybe there’s some miraculous technology we don’t know about yet. Maybe the singularity is brewing under someone’s desk. Maybe Roko’s basilisk is a sensible way to proceed. Maybe I shouldn’t have taken all that ayahuasca. Have you ever looked at your hand, man? I mean, really looked at your hand?

But near as I can tell, the entire AI industry is promising a second act that it hasn’t written yet, that it can’t write. They’ll crank the knobs and declare breakthroughs and hope that nobody notices that “artificial general intelligence” is just LLMs turned up to 11.

[Image: the amp from “This Is Spinal Tap” with the dial turned to 11, but the “11” replaced by “AGI”.]

Hi there! My name's GREG KNAUSS and I like to make things.

Some of those things are software (like Romantimatic), Web sites (like the Webby-nominated Metababy and The American People) and stories (for Web sites like Suck and Fray, print magazines like Worth and Macworld, and books like "Things I Learned About My Dad" and "Rainy Day Fun and Games for Toddler and Total Bastard").

My e-mail address is greg@eod.com. I'd love to hear from you!
