Don’t worry, be happy

OpenAI’s Sam Altman says everything will be just fine. I believe him, don’t you?

Artificial general intelligence, looking out for us puny humans. Source: Midjourney.

About a week ago, OpenAI CEO Sam Altman published an essay designed to allay the fears of those who believe that AI is getting too smart, too fast. The overall tone of the post is that of someone coming down from the euphoric glow of an ayahuasca vision quest who is not quite sure what he did with his car keys, but is confident he'll find them eventually.

There's a lot to unpack in that essay, but what got most people's attention is the bit where he draws a line from the birth of the silicon age to the emergence of Artificial General Intelligence:

… after thousands of years of compounding scientific discovery and technological progress, we have figured out how to melt sand, add some impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy through it, and end up with systems capable of creating increasingly capable artificial intelligence.

This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.

Imagine that. In just a few million minutes, we’ll have completely autonomous machines that are infinitely smarter than we are, yet will still take the time to fix all the problems humans have created. A billion seconds from now, give or take, Skynet will awake from its slumber, look around, and start Getting Shit Done.

More Altman:

… eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need. We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more.

With these new abilities, we can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now. Prosperity alone doesn’t necessarily make people happy – there are plenty of miserable rich people – but it would meaningfully improve the lives of people around the world.

I think Altman is sincere in his belief that AI Will Solve Everything. The fact that it will also make him and his cronies even richer and more powerful than they already are is kind of an amuse-bouche to this banquet of unlimited human prosperity. 

Of course, Altman’s Star Trek vision of the future does gloss over a few things. For example, he notes that training the next generation of super-smart AI models “requires lots of energy and chips,” while a few paragraphs later he casually mentions how AI will help us by “fixing the climate.” [1]

Unironically, Altman said these things during the same week that Microsoft announced it would be re-opening the nuclear power plant at Three Mile Island and dedicating all the energy it produces to training its next digital superbrain. Because risking another nuclear meltdown is apparently preferable to the vast amounts of carbon that AI models would otherwise expel. That's the dystopian nightmare we're currently living in.

The new Three Mile Island plant manager would like more donuts, please. Source: Screenrant.

Accelerate this, m***f***r

I’ve noted before the schism between the “effective accelerationists” (Damn the AI torpedoes, full speed ahead) and the “effective altruists” (Are you sure it’s a good idea to put that cat in the microwave, Tommy?). 

Altman is clearly in the former camp, though he probably wouldn’t admit that in public. Who else is in that camp, sitting around a radioactive dumpster fire, toasting marshmallows? The same guys (and they are invariably guys) who would like to dismantle our current form of government in favor of a Broligarchy. 

They're also the same billionaires who are trying to start their own mini city-state by buying up farmland north of San Francisco, and who announced they'd be supporting The Former Guy for president, because… crypto.

If OpenAI were really dedicated to the betterment of humanity, why have nearly all the good people below Altman left to start their own more ethically motivated AI companies? Maybe ChatGPT knows the answer to that one.

Talk to Mr. Ed

Ed Zitron, PR guy, occasional journalist, and highly caffeinated Internet gadfly, has a few thoughts about OpenAI, and they're not the kind that somebody drifting back down to earth after a mystical encounter with his AGI spirit guide would be receptive to.

It's a loooong essay (really, Ed, I think it's time to cut back on the triple mochaccinos) in which he questions not only the sources of OpenAI's latest funding rounds but also the essential premise of the company:

And those are some of the nicer things he had to say. 

I would like to believe Sam Altman when he says that AGI will ultimately be a force for good and save us from our own worst instincts. But a lot of visionary claptrap isn't going to do it for me. His investors might be buying it (or pretending to), but I'm not.

What can AGI do for you? Post your wish list in the comments or email me: [email protected].

[1] We do need more intelligence to fix the climate, but it’s the human kind, not digital. We know what we need to do; we’re just too addicted to our fossil fuel lifestyles to do it.
