Are We Giving AI Nuclear Weapons?
We've been noting with some concern that the U.S. military is developing hardware controlled in significant ways by artificial intelligence (AI).
The most obvious are the unmanned flying weapons we know as "drones." Among the latest is the U.S. Air Force's YFQ-42A, which carries medium-range air-to-air missiles and incorporates artificial intelligence to operate autonomously -- meaning it makes decisions without human input.
But the military has escalated quickly from drones.
This year, the 180-foot-long, 240-ton warship USX-1 Defiant set sail. And -- as we noted in a previous post -- it's not only unmanned, it doesn't even have a place for a human to stand. It's a robot attack ship with artificial intelligence allowing it to complete naval combat missions autonomously.
Then there are the YFQ-42A and YFQ-44A: Air Force fighters with no pilots, just artificial intelligence on board to guide them. More than ordinary drones, these are full-fledged fighter jets made to fly alongside the F-22 and F-35, armed with missiles and capable of air-to-air combat.
These unmanned craft are not one-off experiments. The goal is to launch entire fleets of AI-driven aircraft and warships. The intention is a good one: taking humans out of harm's way and letting machines do the fighting -- hard to argue with that. So why the concern?
AI is lightning fast and mind-blowing in its abilities. But -- as pointed out in a May 2024 study from MIT called "Large Language Models Seem Miraculous, but Science Abhors Miracles" -- no one actually understands how it works. Add to that, AI hallucinates, a non-threatening way of saying it makes mistakes. According to the MIT paper, "this suggests the need for additional research and great caution in deploying such models for critical applications."
Military combat seems like a critical application. And the one area most critical to keep under human control is our nuclear command, control, and communications (NC3) system for ballistic missiles. The United States maintains more than 400 Minuteman III intercontinental ballistic missiles, which can deliver nuclear warheads to an enemy 6,000 miles away. A single missile hitting a city would cause catastrophic damage. Imagine a massive fireball with temperatures in the millions of degrees vaporizing everything within ten square miles. The shockwave from the blast would destroy buildings miles away and bring hurricane-force winds that fling cars around like scraps of paper. Finally, there's the radiation, causing horrible death and disease for generations. An ICBM is a Chernobyl-level disaster delivered with the push of a button.
And this is the point: a human has to push a button. Actually, two officers have to turn two keys and operate two switches simultaneously -- and from two physically separate consoles -- for a launch signal to reach the missile. There is no AI in the loop.
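To make concrete what that safeguard amounts to, here is a minimal sketch in Python of a two-person interlock in the spirit of the procedure just described: two independent consoles must each register a key turn within the same short window, or no launch signal is generated. Every name and number in it is ours, purely for illustration -- the real Minuteman hardware is electromechanical and vastly more complex.

```python
import time
from typing import Optional

# Illustrative sketch only: a two-person interlock in the spirit of the
# Minuteman procedure described above. All class names, parameters, and the
# two-second window are hypothetical, chosen just to show the logic.

SIMULTANEITY_WINDOW_SECONDS = 2.0  # assumed window in which both keys must turn


class Console:
    """One of two physically separate launch consoles."""

    def __init__(self, console_id: str) -> None:
        self.console_id = console_id
        self.key_turned_at: Optional[float] = None

    def turn_key(self) -> None:
        # Record the moment this console's officer turns the key.
        self.key_turned_at = time.monotonic()


def launch_signal_authorized(a: Console, b: Console) -> bool:
    """True only if both officers turned their keys within the same short window."""
    if a.key_turned_at is None or b.key_turned_at is None:
        return False  # one key alone does nothing
    return abs(a.key_turned_at - b.key_turned_at) <= SIMULTANEITY_WINDOW_SECONDS


alpha, bravo = Console("alpha"), Console("bravo")
alpha.turn_key()
print(launch_signal_authorized(alpha, bravo))  # False: only one key has been turned
bravo.turn_key()
print(launch_signal_authorized(alpha, bravo))  # True: both keys turned within the window
```

The point of the sketch is simply that the authorization logic is dumb, mechanical, and entirely human-driven; nothing in it interprets data or makes a judgment call.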
Except the Air Force wants to change that.
The hot topic at the Air & Space Forces Association's 2025 Warfare Symposium was using AI to support NC3 systems. According to Major General Ty Neuman, "If we don't think about AI, and we don't consider AI, then we're going to lose, and I'm not interested in losing."
Because our enemies will be taking advantage of AI's processing speed, the feeling among our military leadership is that we can't be left behind. Said Neuman, "AI has to be part of what the next generation NC3 [architecture] is going to look like. There's going to be so much data out there, and with digital architectures, resilient architectures, and things like that, we have to take advantage of the speed at which we can process data."
Everyone insists it will still be humans turning the keys. But they have no qualms about AI being a key part of the decision-making process. The general explained, "a human operator will not have the ability to determine what is the most secure and safest pathway, because there's going to be, you know, signals going in 100 different directions. Some may be compromised. Some may not be compromised. (A human) will not be able to determine that, so AI has to be part of that."
Space Force Col. Ryan Rose agreed, saying, "I think it's important to push the boundaries of AI." Rose acknowledged the "challenges and risks" of bringing AI into the NC3 operation, but added, "I think that with robust testing, validation, and implementing oversight mechanisms, I think we can find a way to mitigate some of those risks and challenges, and ultimately deliver AI systems that operate as they're intended."
Air Force Lt. Gen. Andrew Gebara insisted, "There will always be that human in the loop." But if the information those humans act on is generated by artificial intelligence, what does it matter?
We asked ChatGPT if AI still has problems with hallucinations and mistakes. The actual answer, straight from the AI: "Yes, LLMs like me are still prone to hallucinations and can make mistakes with basic math, though these issues are improving."
ChatGPT went on to give examples of hallucinations:
"In AI, a hallucination is when the model confidently gives a wrong or made-up answer, like inventing a quote or source, giving a false historical fact, or misstating a scientific concept."
Can't say for sure if ChatGPT's answer was even true. But it's not very comforting to contemplate a system where AI is a key part of the decision to launch an ICBM.