Why the OpenAI soap opera matters

What looks like a stupid tech spat may determine how quickly AI becomes self-aware

Two robots flinging mashed potatoes at each other. Source: Midjourney.

Well, that was fun.

After a lot of high drama (and millions of words written by folks like me), we seem to be back to where we started. The same guy (Sam Altman) is running the same company (OpenAI) that is driving at breakneck speed toward the Technology That Will Change Life As We Know It (TTWCLAWKI).

For those of you who have real lives and didn't follow this story very closely, here's the tl;dr: OpenAI's board fired CEO Sam Altman. Investors and employees revolted. Board rehired Sam Altman and fired itself (mostly). Everyone then ate too much and fell asleep on the couch.

But this story churned up other disturbing stuff that's worth spending a little more time exploring.

The Q* Memo

About a day after Altman was restored to the throne (and most of the board members who voted to depose him were sent to the Night's Watch [1]), Reuters published a story detailing a memo that was sent to the board of directors "warning of a powerful artificial intelligence discovery that they said could threaten humanity." [2]

That new AI model went by the codename Q* (Q-star). Per the Reuters report:

Some at OpenAI believe Q* could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI).... Given vast computing resources, the new model was able to solve certain mathematical problems...

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence.
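
To unpack that "statistically predicting the next word" bit: a language model assigns a score to every possible next token, converts those scores into probabilities, and then samples one. Here's a minimal toy sketch in Python; the prompt, vocabulary, and scores below are all invented for illustration, and a real model does this over a vocabulary of tens of thousands of tokens:

```python
import math
import random

# Toy next-token scores ("logits") for the word that might follow the
# prompt "The capital of France is". Every value here is invented for
# illustration; a real LLM scores on the order of 100,000 tokens.
logits = {"Paris": 6.0, "a": 2.5, "the": 2.0, "located": 1.5, "Lyon": 0.5}

def sample_next_token(logits, temperature=1.0):
    """Softmax the logits into probabilities, then sample one token."""
    exps = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(exps.values())
    weights = [exps[tok] / total for tok in exps]
    return random.choices(list(exps), weights=weights)[0]

# Sampling is random, which is why the same question can get different
# answers from run to run:
print([sample_next_token(logits) for _ in range(5)])

# Greedy decoding always takes the single highest-scoring token instead:
print(max(logits, key=logits.get))
```

Run that twice and you'll likely get different outputs, which is exactly why "answers to the same question can vary widely," as Reuters puts it.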

Imagine you are teaching a talented 5-year-old how to play the piano. One day you teach her how to play "Chopsticks." About a week later you wake up and hear her playing Rachmaninov's Second Piano Concerto. That's AGI. [3]

I have questions. First, what were they doing naming this thing Q*? Has no one on the staff at OpenAI seen Star Trek: The Next Generation?

Actor John de Lancie as "Q," an extra-dimensional being of unknown origin who's also an enormous asshole. Source: CinemaBlend.

Very little is known about Q* besides the fact that it can do simple math problems that it hasn't been trained to solve — implying that it can think and learn independently. Have we seen this movie before? I think we have. It usually doesn't end well for us carbon-based life forms.

OK Doomers

The other notable thing that emerged from this OpenAI mishegas was the culture war hiding inside it.

There are apparently two warring camps in the world of advanced AI. One wants to proceed with caution, while the other is like, 'Hell nah — let's see just how fast this puppy can go.' And, like everything else on the Internet, it has devolved into a food fight.

On one hand, we have the "effective accelerationists," or e/acc. These are essentially tech bros who've made their millions/billions and want to make even more. (Or they're fans of said tech bros, like all those Elon sycophants who seem to think he actually is Tony Stark.) On the other hand, we have the decelerationists, who think that maybe we ought to stop and think before we put the AI pedal to the metal. They are sometimes called "effective altruists" but are also derisively known as "doomers" or "decels."

There's a lot of sneering and name-calling from the e/acc crowd about the altruists. Like this guy, for example:

You know you can trust someone who posts screeds under a fictional character's name with Robert Redford as his avatar. Source: Infinite Scroll.

It feels a little like two guys on the deck of the Titanic arguing about whether icebergs are real. One says maybe we ought to crank the throttle back and look around, and the other one wants to throw all the lifeboats overboard and use the extra room to put in a casino.

My belief is that the side that resorts to name-calling usually lacks a cogent argument and just wants to bully their opponents into silence. And if that doesn't work, they start issuing threats. That's where we are now with just about everything. AI is no different, save for the fact that, like our worsening climate crisis, it could represent an existential risk to the future of humanity. Other than that, no big deal.

When money talks, ethics walk

It's helpful to remember that OpenAI was founded as a nonprofit to develop artificial intelligence technology in a safe and moral way. Period, full stop. And then they realized that creating TTWCLAWKI requires a massive amount of computing power, electricity, water, and some very smart, highly paid people. They went to Uncle Satya at Microsoft for a little extra cash to fund their research [4]. At that point they spun off a "capped-profit" company that lives underneath the nonprofit and is now worth approximately $86 billion, give or take a few million.

The board that fired Altman was apparently trying to live up to that mission. They got outvoted by Microsoft, other investment firms looking to make a killing on OpenAI stock if/when the company goes public, and employees possibly looking to cash in their shares. Now that board is gone and a new one is in place, full of tech bros.

The question is, which side of the accel/decel argument are they on? The future of humanity may depend on it.

Lifeboats or a casino? Please share your thoughts in the comments below.

[1] That's a Game of Thrones reference, for you non-GOT nerds.

[2] Casey Newton of Platformer seems pretty confident that the board did not receive that memo and that it did not play a part in Altman's firing. Which is kind of like saying 'Actually, King Kong climbed to the top of the Chrysler Building, not the Empire State.' It's still a big fucking gorilla.

[3] There are a lot of people talking/writing about how AGI is either not possible or decades away from being a reality. It's important to remember that 99.9% of these people don't know enough about the topic to make that claim. The remaining 0.1% are pretty much evenly split.

[4] $1 billion at first, now $13 billion, most of it in credits for Azure cloud computing services that haven't been spent yet. So if Altman had not been rehired by OpenAI, and had ended up working at Microsoft (along with nearly all of OpenAI's staff), MSFT would have effectively acquired the company for virtually nothing. It would also not be required to honor the original mission of safe/ethical AI, though I think they (probably) would.
