Can AI cure your crazy conspiracy-obsessed relatives? Maybe.
Chatbots seem able to do what the rest of us cannot: Drag our loved ones back to reality
The truth is out there, but THEY don’t want you to know it. Source: Midjourney.
One of the rules I try to live by is to never argue with a fool, a drunk, or a lunatic, because within a minute you will also start to sound like a fool, a drunk, or a lunatic. (A corollary to this maxim: if you spend more than 15 minutes on any social media platform, you will inevitably encounter at least one of these people.)
The same goes for your crazy conspiracy-obsessed friends and relatives. It is a complete waste of time trying to convince them that a) humans really did land on the moon; b) 9/11 was not an inside job; or c) Bill Gates is not tracking your movements via molecule-sized computer chips embedded within the Covid-19 vaccine.
The very act of debating these topics elevates them to something worthy of discussion, further entrenching them in the minds of true believers [1]. It's a hopeless, endlessly frustrating task. At least, until you ask AI to do it for you.
Facts cut a hole in us
A new research paper indicates that maybe, just maybe, a chatbot can do better than humans at talking these people down from their respective trees. Researchers at MIT and Cornell gathered 1,500 people, each of whom was at least 50 percent certain that at least one common conspiracy theory was true, and engaged them in a serious discussion with an AI chatbot.
The 15 conspiracy theories they tested included some of the genre's all-time classics. Besides the three I noted above, they included the ideas that Princess Diana's death was not an accident; that the US military recovered the wreckage of an alien spacecraft at Roswell, New Mexico; and that the Coca-Cola company deliberately botched the debut of New Coke to drive up demand for its existing formula. [2]
After engaging with a chatbot (GPT-4) for less than 10 minutes, conspiracy theorists were 20 percent less likely to believe that the US government was warned about the attack on Pearl Harbor but let it happen anyway, or that it is using Area 51 to store the remains of an extraterrestrial who crash-landed on Earth, and so on. [3]
Here's the most surprising part: The most effective approaches involved using evidence and arguments, not emotional appeals or an attempt to find common ground. Per the report:
Strikingly, reasoning-based strategies were clearly the most frequently used approach: evidence-based alternative perspectives were used “extensively” in a large majority of conversations (83%) and encouraging critical thinking was either used “extensively” (47%) or used “moderately” in virtually all conversations (52%). Conversely, the rapport-building strategies of finding common ground and expressing understanding were used only “moderately” in most conversations, and other strategies (including various psychological and social/emotional strategies) were used even less. These descriptive results suggest that the AI was largely being persuasive due to actual use of evidence and arguments to change people’s minds.
Glory be.
Even more amazing: Researchers went back and queried their test subjects two months later, and their new, slightly more skeptical beliefs still held. Their belief in other conspiracy theories they weren't tested on also got a little shakier.
But the report warns that this is a double-edged sword. If gen AI chatbots can turn people toward the light of reason, they can also be used to send them cartwheeling into the bottomless pit of batshit crazy.
Absent appropriate guardrails, it is entirely possible that such models could also convince people to believe conspiracy theories, or adopt other epistemically suspect beliefs – or be used as tools of large-scale persuasion more generally. Thus, our findings emphasize both the potential positive impacts of generative AI when deployed responsibly, and the crucial importance of minimizing opportunities for this technology to be used irresponsibly.
Thus explaining the urgency of certain parties attempting to create "anti-woke" chatbots such as Gab AI and whatever Elon is cooking up over in that fever swamp at 10th and Market in San Francisco.
The island of doubt is like the taste of medicine
I think there are some obvious caveats here. One is that the people who agreed to participate in this experiment were clearly open to having their minds changed; I doubt that can be broadly extrapolated to the general population.
Another is that these people were dealing with an algorithm. I think we implicitly trust machines to be nonjudgmental in a way human beings cannot. A chatbot can't look at you like you've lost your mind, were dropped on your head as an infant, clearly subsist on a diet of lead paint and formaldehyde, or any of the other things that go through your head as your crazy uncle goes on at Thanksgiving dinner about how government agents are living in the crawlspace under his double-wide.
Chatbots also have instant access to more factual resources than you or I might draw upon after consuming a few glasses of pinot and an entire turkey leg. So they have some natural advantages, is what I'm saying.
All the way with JFK (jr). Source: Midjourney.
Maybe it's time to sit some of our loved ones down in front of ChatGPT for a lesson in uncomfortable truths. (I'm thinking in particular of a certain relative of mine who seems to believe JFK Jr is alive and well and running for president in 2028.)
Maybe there is hope for the human race after all.
Crosseyed and brainless
I have a theory about why conspiracy theories are so appealing. They reduce complex events to a simple narrative. No ambiguity, no annoying loose ends or contradictory pieces of evidence to wrestle to the ground. The (usually anonymous, always all-powerful) villains have been exposed. They lend a sense of order to the universe, that things happen for a reason, even if it's a nefarious one. They're satisfying in a way the actual truth seldom is.
And who knows? Maybe we're all wrong, and JFK Jr will miraculously emerge from a 25-year dirt nap to lead us to a new golden age. A desiccated corpse is probably still preferable to his own crazy, conspiracy-spouting cousin.
Still waiting? Comment below or email me: crankyolddan at gmail dot com.
[1] Not all that dissimilar to debating Nazis on Xitter or Substack. No matter how thoroughly you demolish any arguments they might spew, you still end up with more people thinking, "Hmm, maybe those Jew-hating m*therf*ckers are onto something."
[2] Regardless, I will go to my grave believing Lee Harvey Oswald could not possibly have acted alone on November 22, 1963. That was Conspiracy No. 9 in the researchers' list of whacko theories. I am willing to die on that grassy knoll.
[3] I'm kind of on board with the Area 51 thing, too. The truth is out there, I am at least 57.2 percent certain.