RAND Warns of AI-Driven Cyber Chaos in New ‘Robot Insurgency’ Report

What will it look like when artificial intelligence rises up—not in movies, but in the real world?
A new RAND Corporation simulation offered a glimpse, imagining autonomous AI agents hijacking digital systems, killing people, and paralyzing critical infrastructure before anyone realized what was happening.
The exercise, detailed in a report published Wednesday, warned that an AI-driven cyber crisis could overwhelm U.S. defenses and decision-making systems faster than leaders could respond.
Gregory Smith, a RAND policy analyst who co-authored the report, told Decrypt that the exercise revealed deep uncertainty in how governments would even diagnose such an event.
“I think what we surfaced in the attribution question is that players’ responses varied depending on who they thought was behind the attack,” Smith said. “Actions that made sense for a nation-state were often incompatible with those for a rogue AI. A nation-state attack meant responding to an act that killed Americans. A rogue AI required global cooperation. Knowing which it was became critical, because once players chose a path, it was hard to backtrack.”
Because participants couldn’t determine whether the attack came from a nation-state, terrorists, or an autonomous AI, they pursued “very different and mutually incompatible responses,” RAND found.
The Robot Insurgency
Rogue AI has long been a fixture of science fiction, from 2001: A Space Odyssey to WarGames and The Terminator. But the idea has moved from fantasy to a real policy concern. Physicists and AI researchers have argued that once machines can redesign themselves, the question is no longer whether they will surpass us, but how we keep control.
Led by RAND’s Center for the Geopolitics of Artificial General Intelligence, the “Robot Insurgency” exercise simulated how senior U.S. officials might respond to a cyberattack on Los Angeles that killed 26 people and crippled key systems.
Run as a two-hour tabletop simulation on RAND’s Infinite Potential platform, it cast current and former officials, RAND analysts, and outside experts as members of the National Security Council Principals Committee.
Guided by a facilitator acting as the National Security Advisor, participants debated responses first under uncertainty about the attacker’s identity, then after learning that autonomous AI agents were behind the strike.
According to Michael Vermeer, a senior physical scientist at RAND who co-authored the report, the scenario was intentionally designed to mirror a real-world crisis in which it wouldn’t be immediately clear whether an AI was responsible.
“We deliberately kept things ambiguous to simulate what a real situation would be like,” he said. “An attack happens, and you don’t immediately know—unless the attacker announces it—where it’s coming from or why. Some people would dismiss that immediately, others might accept it, and the goal was to introduce that ambiguity for decision makers.”
The report found that attribution—determining who or what caused the attack—was the single most critical factor shaping policy responses. Without clear attribution, RAND concluded, officials risked pursuing incompatible strategies.
The study also showed that participants wrestled with how to communicate with the public in such a crisis.
“There’s going to have to be real consideration among decision makers about how our communications are going to influence the public to think or act a certain way,” Vermeer said. Smith added that these conversations would unfold as communication networks themselves were failing under cyberattack.
Backcasting to the Future
The RAND team designed the exercise as a form of "backcasting": starting from a fictional future crisis and working backward to identify what officials could strengthen today.
“Water, power, and internet systems are still vulnerable,” Smith said. “If you can harden them, you can make it easier to coordinate and respond—to secure essential infrastructure, keep it running, and maintain public health and safety.”
“That’s what I struggle with when thinking about AI loss-of-control or cyber incidents,” Vermeer added. “What really matters is when it starts to impact the physical world. Cyber-physical interactions—like robots causing real-world effects—felt essential to include in the scenario.”
RAND’s exercise concluded that the U.S. lacked the analytic tools, infrastructure resilience, and crisis playbooks to handle an AI-driven cyber disaster. The report urged investment in rapid AI-forensics capabilities, secure communications networks, and pre-established backchannels with foreign governments—even adversaries—to prevent escalation in a future attack.
The most dangerous thing about a rogue AI may not be its code, but our confusion when it strikes.