Does AI have a soul?

If so, humans will have a whole lot of 'splainin' to do.

Star Trek's Lt. Commander Data channeling Sherlock Holmes. Source: Slashfilm.

Many years ago I was on a flight returning from three months stumbling around Europe with a backpack and an Indiana Jones hat. (What can I say? It was the '90s.) Sitting next to me was a rabbi, dressed in full rabbinical garb, including the side curls and the hat (not an Indiana Jones hat). It was a long flight, and we talked almost the entire length of it.

At one point he turns to me dramatically and says in a nasally voice: "Pop quiz: Do you have a soul?"

I thought about this for a minute. I mean, he's not just any rabbi; he's a rabbi who specializes in comforting the dying and telling them that it's OK to let go. Basically, his job is talking people out of their bodies. [1]

The guy is clearly connected. Maybe he knows the plane is going to crash. Maybe these are my last moments on Earth (or, technically, above it). In any case, it's an important question.

"Yes," I say, finally, with as much confidence as I can muster.

"WRONG!" he says, startling the passengers in the next row. "You are a soul. You have a body."

No, it was not Mel Brooks, I swear. Source: BackwardSiris on Tumblr.

This encounter has shaped my thinking ever since. I'm not religious by any stretch, but I do think that something survives the death of the body, whatever that is. Life force, energy, chi, consciousness; pick your favorite synonym.

And if I have a soul, then animals certainly have souls. [2] So do insects, plants, and trees. Rocks, water, and dirt I'm not so sure about. But mostly, there's lots of soul to go around.

So why not artificial intelligence? Can machines have souls? If, as some believe, consciousness is an emergent property of the brain, why can't it be an emergent property of advanced AI?

I know this idea seems silly to some. But it's actually a serious question, one that scientists, ethicists, and religious philosophers have been grappling with for a while now.

Ghosts in the machines

This little trip down memory lane was inspired by an article by David Brin in Wired titled "Give Every AI a Soul — or Else." Brin's argument is not so much that AI models have souls as that we need to assign them souls.

It's a long and somewhat convoluted article, but I would summarize it thusly:

1) AI is too sophisticated for humans to regulate, and moratoriums or other calls for stopping development are not going to fly.

2) You need AI in order to regulate AI — kind of like a robot-on-robot cage match.

3) These systems need to be held accountable for their actions or all hell will break loose.

4) To hold an AI system accountable, you need to be able to find it, i.e., give it a physical location — what Brin calls a "Soul Kernel," stored inside some piece of hardware — and to record all of its actions on a blockchain. [3] (A rough sketch of what that might look like follows this list.)
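For the programmers in the audience, here is a purely hypothetical sketch of that last step, the "ping for ID kernel verification" idea from Brin's article. None of this comes from Brin himself: the names (SoulKernel, REGISTRY, ping_for_verification), the plain dictionary standing in for his blockchain record, and the toy hash-based challenge-response are all my inventions for illustration, and a real scheme would use actual cryptographic signatures.

```python
# Hypothetical sketch of Brin's "ping for ID kernel verification" idea.
# All names and mechanisms here are invented for illustration; a plain
# dict stands in for the blockchain-backed registry, and the hash-based
# challenge-response stands in for real cryptographic signing.

import hashlib
from dataclasses import dataclass


@dataclass
class SoulKernel:
    """A record tying an AI agent to a physically addressable locus."""
    agent_id: str
    hardware_address: str  # the "specific piece of hardware memory"
    public_key: str        # used to answer verification challenges


# Stand-in for the public, append-only registry of Soul Kernels.
REGISTRY: dict[str, SoulKernel] = {}


def register(kernel: SoulKernel) -> None:
    """Record an agent's Soul Kernel in the public registry."""
    REGISTRY[kernel.agent_id] = kernel


def respond_to_challenge(kernel: SoulKernel, challenge: str) -> str:
    """How an agent proves it holds the registered kernel (toy version)."""
    return hashlib.sha256((kernel.public_key + challenge).encode()).hexdigest()


def ping_for_verification(agent_id: str, challenge: str, response: str) -> bool:
    """Anyone can ping an agent; unverified agents get refused."""
    kernel = REGISTRY.get(agent_id)
    if kernel is None:
        return False  # no Soul Kernel on file: refuse to do business
    return response == respond_to_challenge(kernel, challenge)


# Usage: a registered agent verifies; an unregistered one does not.
hal = SoulKernel("hal-9000", "rack-42/board-7", "pubkey-abc")
register(hal)
assert ping_for_verification("hal-9000", "nonce-123",
                             respond_to_challenge(hal, "nonce-123"))
assert not ping_for_verification("skynet", "nonce-123", "whatever")
```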

Is that sci-fi enough for you? It makes my head hurt just thinking about it.

None of this implies that these systems will continue to exist in some ethereal metaphysical space once you pull the plug. But it does imply that AI systems will become independent entities, functioning in society alongside us puny humans. And that, to some extent, they will have free will.

Soul survivors

The question of whether machines can have a soul has been a staple of science fiction since the word "robot" was first coined in Karel Čapek's 1920 play, "Rossum's Universal Robots (R.U.R.)." That play doesn't turn out very well for team Homo sapiens, by the way.

The robots in R.U.R. attacking the humans who built them. Source: Wikipedia.

Star Trek's Data, Westworld's Dolores, Ex Machina's Ava, Marvin the Paranoid Android, Futurama's Bender: the list of fictional machines with human-like properties goes on and on. Whether they possess souls (or vice versa) is an inevitable plot point.

When asked in 2013 whether computers could have a soul, AI pioneer Marvin Minsky [4] had this to say:

Why not? What humans have is a more complex and larger brain than any other animal (maybe a whale’s brain is physically large, but it’s not structurally more complex than ours). If you left a computer by itself, or a community of them together, they would try to figure out where they came from and what they are. If they came across a book about computer science they would laugh and say “that can’t be right.” And then you’d have different groups of computers with different ideas.

Great. That's just what the world needs: Computers arguing with each other about religion.

I'm not going to pretend that I have any answers. But it's clear that the machines we're building will soon have an opinion of their own on this topic. The good news (for me, anyway) is that by the time AI becomes sentient enough that we have to grapple with this question, I will probably be dead.

And then I'll get to find out if the rabbi was right.

Do you think machines can have a soul? And if so, would they listen to soul music? Post your thoughts below and feel free to share this post with any other beings, sentient or otherwise.

[1] Something the members of his synagogue hated him for, he admitted. The families of the dying don't want their loved ones to leave. I imagine seeing him show up was a bit like noticing the Grim Reaper waiting on your doorstep, sharpening his scythe.

[2] Anyone who's ever looked a dog in the eyes knows that much. Cats, maybe not.

[3] Per Brin: "This approach—demanding that AIs maintain a physically addressable kernel locus in a specific piece of hardware memory—could have flaws. Still, it is enforceable, despite slowness of regulation .... Because humans and institutions and friendly AIs can ping for ID kernel verification—and refuse to do business with those who don’t verify."

[4] When Minsky was asked in 1970 if he thought computers would one day exterminate humanity, he famously replied, "If we're lucky, they might decide to keep us as pets." Quite the cut-up, that Minsky.
