Yes, AI will destroy us all one day. What's for lunch?

Nothing like a strongly worded letter to stop a trillion-dollar industry dead in its tracks.

No humans were harmed in the making of this image. Source: Midjourney.

This week, yet another group of concerned scientists has signed yet another letter warning the world about the potentially catastrophic dangers of AI, if left unchecked. And like every letter before it, this one leaves us all a little... bored. We've binged this show already. What's new for season two?

The dangers are so theoretical, and the warnings so vague, as to be useless.

It's a little like living in California. Yes, we know The Big One is coming one day, and it's certain to have devastating consequences — buildings will tumble, bridges will snap, and the San Francisco waterfront will start shimmying like a belly dancer with a bumblebee in her bloomers.

But knowing this, and feeling it deep in one's gut, are two distinctly different things.

I've probably experienced half a dozen 'serious' earthquakes while living in the Golden State. The biggest one, Loma Prieta in 1989 [1], killed 63 people, caused $6 billion in damage, and blacked out the city for three nights. It also scared the shit out of my cats. I remember that day extremely well. But it doesn't make me nervous about living in California. I rarely even think about it. [2]

Even The San Andreas Fault's Twitter account isn't taking things too seriously.

Who knew geology porn was a thing?

What we need is an AI earthquake — a little one, but something that shows the potential for danger — to shake us up, metaphorically speaking. Something that illustrates the danger these very smart people clearly see, but seem unable to articulate.

The Skynet is falling, the Skynet is falling

In an earlier post I listed eight major threats that AI poses — only one of them truly existential. The most likely downside most people will experience is being denied something (a job, a mortgage, insurance, etc.) they would otherwise qualify for, and nobody with two legs can say why. Another is mistaken identity, especially at the hands of law enforcement. [3] A third, and very likely, consequence is AI supercharging the Surveillance Industrial Complex.

In this case, The Big One is allowing AI to control major systems, such as air defense or the power grid, and make decisions with little or no human intervention. That could get messy fast.

The closest example I can think of occurred during the Reagan era, at the height of nuclear tensions between the US and the Soviet Union.

On September 26, 1983, the USSR's early-warning satellite system detected the launch of five nuclear-tipped ICBMs aimed straight at Mother Russia. A lieutenant colonel in the Soviet Air Defense Forces named Stanislav Petrov had five minutes to decide whether to report the launch as genuine, a call that would almost certainly have triggered a counterattack. Per Petrov's obit in the New York Times:

Colonel Petrov was at a pivotal point in the decision-making chain. His superiors at the warning-system headquarters reported to the general staff of the Soviet military, which would consult with Mr. Andropov on launching a retaliatory attack.

After five nerve-racking minutes — electronic maps and screens were flashing as he held a phone in one hand and an intercom in the other, trying to absorb streams of incoming information — Colonel Petrov decided that the launch reports were probably a false alarm.

As he later explained, it was a gut decision, at best a “50-50” guess, based on his distrust of the early-warning system and the relative paucity of missiles that were launched.

So that's how close we came to likely nuclear annihilation. Had that system been automated with AI... well, I think we've all seen that movie. [4]

The existential threat

This is why every important system controlled by AI needs a Human In The Loop in charge of the final decision. (I would call it a Human In The Loop Emergency Response, but that's a rather unfortunate acronym.) We need a Colonel Petrov.

Of course, we could end up with somebody who thinks it's a swell idea to drop an H-bomb into the middle of a hurricane. [5] So maybe we need more than one Petrov.
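For the programmers in the audience, here's a toy sketch of what such a gate might look like: the AI acts on its own below some severity threshold, and anything above it waits for a human to approve or veto. Every name and number below is made up for illustration; a real air-defense or power-grid system would be unimaginably messier.

```python
from dataclasses import dataclass

# Hypothetical cutoff: anything above this needs a human sign-off.
SEVERITY_THRESHOLD = 0.5

@dataclass
class Recommendation:
    action: str       # what the AI wants to do
    severity: float   # 0.0 (harmless) to 1.0 (irreversible)
    rationale: str    # why the AI thinks so

def execute(action: str) -> None:
    print(f"Executing: {action}")

def human_in_the_loop(rec: Recommendation) -> None:
    """Low-severity actions run automatically; everything else waits
    for a person -- our Colonel Petrov -- to approve or veto."""
    if rec.severity <= SEVERITY_THRESHOLD:
        execute(rec.action)
        return
    print(f"AI recommends: {rec.action}")
    print(f"Rationale: {rec.rationale}")
    answer = input("Approve? [y/N] ").strip().lower()
    if answer == "y":
        execute(rec.action)
    else:
        print("Vetoed by human. No action taken.")

if __name__ == "__main__":
    human_in_the_loop(Recommendation(
        action="launch retaliatory strike",
        severity=1.0,
        rationale="early-warning satellites report five inbound ICBMs",
    ))
```

The whole point is that the default answer for the irreversible stuff is "no": if the human does nothing, nothing happens.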

A very weird and disturbing video from one of my favorite bands. You’re welcome.

Yes, AI regulations are coming. Hopefully they will include restrictions on the kinds of systems that can run on autopilot. But such rules are only useful if everyone in the world agrees to follow them. Which is why AI needs to be treated the way we treat nuclear weapons: with global agreements, multinational oversight, and both companies and countries willing to give up competitive advantages (and potentially billions in revenue) for the good of all.

It's gonna take more than a letter to make that happen. But I suppose it could be a start.

On a scale of one to ten, how scared of AI are you, really? How about earthquakes? Tell us what’s shakin’ in the comments below, and be sure to share this cheery post with all your apocalyptic-minded friends.

[1] AKA the World Series earthquake.

[2] At least, until the earth starts shaking again. And then I'm like, 'Should I stand in the doorway or hide under the table? Where's my phone? Where are my keys? Where's the cat? Do I have time to eat something before the house collapses?' And by then it's over, and like every other Californian within 50 miles of the epicenter I'm playing the Richter Scale game: "I'm thinkin' 4.4, maybe a 4.7. Definitely not a 5."

[3] Is it any surprise that the 3+ known false arrests due to biased AI all happened to Black men? Facial recognition algorithms are notoriously unreliable for people of color, but that doesn't stop cops from using them.

[4] Ironically, WarGames was released just three months before the US/USSR nuclear near miss. Whoever said life imitating art is a good thing is an asshole.

[5] Person Woman Man Camera KA-BOOM!
