Agentic AI is coming to a bot near you

AI models will soon operate without human supervision. What, me worry?

That’s supposed to be Alfred E. Neuman, but Midjourney refused to draw him.

As part of my ongoing mission to depress and/or scare the crap out of my readers, I’d like to introduce a term you may not have heard yet but that is suddenly very much in vogue amongst the Geekerati: Agentic AI.

Until a couple of weeks ago, I had not heard much about Agentic AI. Now everybody seems to be talking about it. How this happened so quickly, I haven't a clue. 

To me, agentic sounds like one of those new pharmacological concoctions designed to alleviate a minor physical ailment like bunions or hangnails but whose side effects may include nausea, schizophrenia, incontinence, or death. Agentic AI is not that. But it could potentially be worse than that. 

Secret agents, man

Agentic simply means that an AI model is making decisions and taking actions without asking for permission first. In practice, AI agents function a bit like the algorithms that already rule our digital lives, only these algos aren’t programmed ahead of time by human beings – they’re created on the fly by generative AI models, and they don’t need to be re-programmed to respond to novel situations. 
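
If it helps to see the distinction in code, here’s a minimal sketch. Every name in it is invented, and the “model” is a stub; think of it as a cartoon of an agent loop, not anyone’s actual product:

```python
# A toy agent loop: decide on an action, then execute it -- optionally
# without ever asking a human first. All names here are made up.

def plan_next_step(goal: str, history: list[str]) -> str:
    """Stand-in for a generative model choosing the next action."""
    if "email" in goal and "send_email" not in history:
        return "send_email"
    return "done"

TOOLS = {
    "send_email": lambda: print("email sent"),
}

def run_agent(goal: str, require_approval: bool = False) -> None:
    history: list[str] = []
    while (action := plan_next_step(goal, history)) != "done":
        if require_approval:
            # The "human in the loop" version: pause and ask first.
            if input(f"Run '{action}'? [y/N] ").strip().lower() != "y":
                break
        TOOLS[action]()  # the agentic version just acts
        history.append(action)

# run_agent("email the quarterly report")  # acts without permission
```

Flip `require_approval` off and the loop never stops to check with anyone. That, in a nutshell, is the “agentic” part.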

This is not in itself a bad thing. There are times when you absolutely want AI to be ‘agentic’ and take action on its own. For example, you don’t want your quasi-self-driving car to ask whether it’s ok to hit the brakes before you rear-end that station wagon full of nuns. (A side question: Why is it always a station wagon? Don’t nuns ever drive SUVs or minivans?) If your AI-powered health monitor senses that you are having a heart attack, I think you’d want it to proactively call the EMTs and alert your doctor. When Russian/Chinese/North Korean hackers are attacking your corporate network, you need your AI security software to sniff out that attack and automatically quarantine all traffic from those evil mofos. 

There are hundreds of other examples where it’s actually better when machines take actions independently — and some cases where it’s definitely not. The question becomes, who decides what we can trust the AI to do, and which decisions require adult supervision?

Pretty much all of the AI doomsday movies involve machines acting outside of human oversight, which is why the people who are developing these fantastically complex, mind-numbingly expensive AI models like to talk about keeping the “human in the loop.” 

The idea is that we don’t have to worry that machines will become self-aware (and decide to rid the planet of its most troublesome primates) because there will be a person standing next to the kill switch, ready to head them off at the pass.

Unless the person at the switch is this guy:

Or – worse – this guy:

They’ll take Manhattan

The incoming [insert your favorite dystopian adjectives here] administration has already made it clear that it plans to undo the fairly limited restrictions on AI that the current (sane) administration has enacted so far. [1] It’s also stocked with Tech Broligarchs who will happily absorb tens of billions of tax dollars to erect a new AI military-industrial complex.

Per a Washington Post report from last July:

Former president Donald Trump’s allies are drafting a sweeping AI executive order that would launch a series of “Manhattan Projects” to develop military technology and immediately review “unnecessary and burdensome regulations” — signaling how a potential second Trump administration may pursue AI policies favorable to Silicon Valley investors and companies….

Greater military investment in AI probably stands to benefit tech companies that already contract with the Pentagon, such as Anduril, Palantir and Scale. Key executives at those companies have supported Trump and have close ties to the GOP.  

Remember how the original Manhattan Project turned out?

The AI arms race will make the nuclear arms race look like a sack race at a Boy Scout Jamboree. And all the Silicon Valley Bros will be lining up at the money trough, yelling ‘Bring it on, my dudes.’

Ducks and cover

Last November, the US State Department issued a declaration about the responsible use of autonomous AI by the military, which has since been endorsed by 60 nation-states. [2] The ten policy guidelines are fairly general, but the last one speaks to the heart of the matter:

States should implement appropriate safeguards to mitigate risks of failures in military AI capabilities, such as the ability to detect and avoid unintended consequences and the ability to respond, for example, by disengaging or deactivating deployed systems when such systems demonstrate unintended behavior.

In other words, there needs to be an adult in the room who can pull the plug before Skynet starts sending The Terminator back in time to kill Sarah Connor. But the odds of having a human in the loop when we really need one are about to get a whole lot worse. 
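
In code, that guideline boils down to something like this toy watchdog. Everything here (the class names, the anomaly check, the threshold) is invented for illustration:

```python
# A toy sketch of "disengaging or deactivating deployed systems when
# such systems demonstrate unintended behavior." Purely illustrative.

import random

class DeployedSystem:
    def __init__(self) -> None:
        self.active = True

    def act(self) -> str:
        # Stand-in for whatever the autonomous system does each cycle.
        return random.choice(["expected", "expected", "unintended"])

def watchdog(system: DeployedSystem, max_anomalies: int = 1) -> None:
    """Disengage the system once anomalies exceed a tolerance."""
    anomalies = 0
    while system.active:
        if system.act() == "unintended":
            anomalies += 1
            if anomalies > max_anomalies:
                system.active = False  # pull the plug
                print("Deactivated: unintended behavior detected.")

# watchdog(DeployedSystem())
```

The hard part, of course, isn’t the loop. It’s deciding who controls that kill switch and what counts as “unintended.”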

The US and its allies are already using AI to launch drone attacks in Ukraine and the Middle East with what are called Lethal Autonomous Weapons Systems. [3] Right now the AI picks the targets and humans push the buttons. [4] What’s to stop the next collection of assclown sociopaths (sorry, civilian appointees) from letting robots call all the shots?

Still with me? Have I depressed/terrified you sufficiently? Here’s a video of an adorable kitten and some baby ducks as a palate cleanser.

None of this is inevitable. But ensuring that AI agents are used for good and not evil starts by being aware of what these things are capable of, and who’s trying to profit from them — before Ahnold shows up, looking for Sarah. 

The Tynan Files will be taking a well deserved break over the holidays, returning in the new year. Share your favorite joke or egg nog recipe in the comments or email me: [email protected].

[1] This does not mean there won’t be legislation regulating AI; it means we’ll end up with a confusing jumble of state regulations that companies will have to sort through. But at the federal contractor level it will likely be an AI free-for-all.

[2] Countries not yet endorsing responsible AI use by the military include the People’s Republic of China, Russia, Iran, and North Korea. But Djibouti is on the list, so at least we got that going for us.

[3] Or LAWS, for short. Who says the military lacks a sense of humor?

[4] As Brianna Rosen points out in the Just Security blog, how AI currently selects who gets targeted is not at all transparent, and it’s unlikely to get much clearer in the future. This will probably lead to more unintentional civilian casualties, with military leaders pinning the blame on the robots.
