AI will likely shut down critical infrastructure on its own, no attackers required
With a new Gartner report suggesting that AI problems will “shut down national critical infrastructure” in a major country by 2028, CIOs need to rethink industrial controls that are very quickly being turned over to autonomous agents.
Gartner embraces the term Cyber Physical Systems (CPS) for these technologies, which it defines as “engineered systems that orchestrate sensing, computation, control, networking and analytics to interact with the physical world (including humans). CPS is the umbrella term to encompass operational technology (OT), industrial control systems (ICS), industrial automation and control systems (IACS), Industrial Internet of Things (IIoT), robots, drones, or Industry 4.0.”
The issue it cites is not so much one of AI systems making mistakes along the lines of hallucinations, although that is certainly a concern, but that the systems won’t notice subtle changes that experienced operational managers would detect. And when it comes to directly controlling critical infrastructure, relatively small errors can mushroom into disasters.
“The next great infrastructure failure may not be caused by hackers or natural disasters, but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal,” said Wam Voster, VP Analyst at Gartner. “A secure ‘kill-switch’ or override mode accessible only to authorized operators is essential for safeguarding national infrastructure from unintended shutdowns caused by an AI misconfiguration.”
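What such an override might look like in software is necessarily an assumption; the sketch below simply illustrates the shape of the idea, with an invented flag path, environment variable, and token scheme standing in for whatever authentication a real plant would use. The autonomous loop applies its setpoints only while no operator-held interlock is engaged.

```python
# Hypothetical sketch of an operator-held override: the autonomous control loop
# acts only while a separately controlled interlock says it may. The paths, the
# environment variable, and the token scheme are illustrative assumptions.
import hashlib
import hmac
import os

OVERRIDE_FLAG = "/var/run/plant/ai_override"                # presence = operators have taken control
OPERATOR_KEY = os.environ.get("OPERATOR_KEY", "").encode()  # secret held by authorized operators

def override_engaged() -> bool:
    """True if an authorized operator has engaged the kill-switch."""
    if not os.path.exists(OVERRIDE_FLAG):
        return False
    with open(OVERRIDE_FLAG, "rb") as f:
        token = f.read().strip()
    expected = hmac.new(OPERATOR_KEY, b"engage-override", hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(token, expected)  # only a correctly signed token counts

def control_step(apply_setpoint, proposed_value):
    """Apply the AI's proposed setpoint only when no human override is active."""
    if override_engaged():
        return  # fail safe: leave the process under manual control
    apply_setpoint(proposed_value)
```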
“Modern AI models are so complex they often resemble black boxes. Even developers cannot always predict how small configuration changes will impact the emergent behavior of the model. The more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed,” Voster added.
Enterprise CIOs and other IT leaders have been aware of industrial AI risks for years, and have had guidance on how to mitigate them. But as autonomous AI takes direct control of more systems, the dangers have grown with it.
Matt Morris, founder of Ghostline Strategies, said one challenge with industrial AI controls is that they can be weak at detecting model drift.
“Let’s say I tell it ‘I want you to monitor this pressure valve.’ And then, slowly, the normal readings start to drift over time,” Morris said. Will the system consider that change just background noise, given that it might think all systems change a bit during operations? Or will it know that this is a hint of a potentially massive problem, as an experienced human manager would?
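To make that concrete, here is a minimal sketch of the kind of check that catches slow drift rather than single bad readings; the window size, slack, and alarm threshold are invented, and a real deployment would feed it from a plant historian rather than raw samples.

```python
# Hypothetical sketch: flag slow drift in a pressure reading that per-sample
# anomaly checks would dismiss as noise, using a simple CUSUM-style accumulator.
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    def __init__(self, baseline_window=500, slack=0.5, alarm_threshold=8.0):
        self.baseline = deque(maxlen=baseline_window)  # "known good" readings captured at startup
        self.slack = slack                    # deviations smaller than slack * sigma are ignored
        self.alarm_threshold = alarm_threshold  # cumulative score that triggers human review
        self.cusum_pos = 0.0
        self.cusum_neg = 0.0

    def update(self, reading):
        """Return True if accumulated drift warrants a human review."""
        if len(self.baseline) < self.baseline.maxlen:
            self.baseline.append(reading)     # still learning the baseline
            return False
        mu, sigma = mean(self.baseline), stdev(self.baseline)
        z = (reading - mu) / sigma if sigma else 0.0
        # Accumulate small, persistent deviations instead of judging each sample alone.
        self.cusum_pos = max(0.0, self.cusum_pos + z - self.slack)
        self.cusum_neg = max(0.0, self.cusum_neg - z - self.slack)
        return max(self.cusum_pos, self.cusum_neg) > self.alarm_threshold
```

The point of the accumulator is that no single reading looks alarming on its own; only the persistence of small deviations does, which is exactly the pattern Morris describes an experienced operator noticing.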
Despite these and other questions, “companies are implementing AI super fast, faster than they realize,” Morris said.
Industrial AI moving too fast
Flavio Villanustre, CISO for the LexisNexis Risk Solutions Group, said he has also seen indicators that AI might be taking over too much too fast.
“When AI is controlling environment systems or power generators, the combination of complexity and non-deterministic behaviors can create consequences that can be quite dire,” he said. Boards and CEOs think, “‘AI is going to give me this productivity boost and reduce my costs.’ But the risks that they are acquiring can be far larger than the potential gains.”
Villanustre fears that boards and CEOs may not apply the brakes on industrial autonomous AI until after their enterprise suffers a catastrophe. “[But] I don’t think that [board members] are evil, just incredibly reckless,” he said.
Cybersecurity consultant Brian Levine, executive director of FormerGov, agreed that the risks are extreme: extremely dangerous and extremely likely.
“Critical infrastructure runs on brittle layers of automation stitched together over decades. Add autonomous AI agents on top of that, and you’ve built a Jenga tower in a hurricane,” Levine said. “It is helpful for organizations, especially those operating critical infrastructure, to adopt and measure their maturity, using respected frameworks for AI safety and security.”
Bob Wilson, cybersecurity advisor at the Info-Tech Research Group, also worries about the near inevitability of a serious industrial AI mishap.
“The plausibility of a disaster that results from a bad AI decision is quite strong. With AI becoming embedded in enterprise strategies faster than governance frameworks can keep up, AI systems are outpacing risk controls,” Wilson said. “We can see the leading indicators of rapid AI deployment and limited governance increasing potential exposure, and those indicators justify investments in governance and operational controls.”
Wilson noted that companies must explore new ways of looking at industrial AI controls.
“AI can almost be seen as an insider, and governance should be in place to manage that AI entity as a potential accidental insider threat,” he said. “Prevention in this case begins with tight governance over who can make changes to AI settings and configurations, how those changes are tested, how the rollout of those changes is managed, and how quickly those changes can be rolled back. We do see that this kind of risk is amplified by a widening gap between AI adoption and governance maturity, where organizations deploy AI faster than they establish the controls needed to manage its operational and safety impact.”
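A rough sketch of how Wilson’s four questions (who can change settings, how changes are tested, how rollout is managed, and how fast it can be rolled back) might translate into code; every name here is invented for illustration rather than drawn from any real governance product.

```python
# Illustrative sketch of change control for AI configuration: changes need a
# named approver, must pass a validation hook, and keep the prior version so
# they can be rolled back quickly. All names here are invented.
import copy
import time

class ConfigChangeLog:
    def __init__(self, initial_config, approvers, validate):
        self.current = initial_config
        self.approvers = set(approvers)   # who may change AI settings
        self.validate = validate          # how a change is tested before rollout
        self.history = []                 # what makes a fast rollback possible

    def apply(self, new_config, requested_by):
        """Roll out a new configuration only if it is authorized and validated."""
        if requested_by not in self.approvers:
            raise PermissionError(f"{requested_by} is not authorized to change AI settings")
        if not self.validate(new_config):
            raise ValueError("proposed configuration failed pre-deployment validation")
        self.history.append((time.time(), copy.deepcopy(self.current), requested_by))
        self.current = new_config

    def rollback(self):
        """Restore the most recent previous configuration."""
        if not self.history:
            raise RuntimeError("nothing to roll back to")
        _, previous, _ = self.history.pop()
        self.current = previous
```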
Thus, he said, companies should set up a business risk program with a governing body that defines and manages those risks, monitoring AI for behavior changes.
Reframe how AI is managed
Sanchit Vir Gogia, chief analyst at Greyhound Research, said addressing this problem requires executives to first reframe the structural questions.
“Most enterprises still talk about AI inside operational environments as if it were an analytics layer, something clever sitting on top of infrastructure. That framing is already outdated,” he said. “The moment an AI system influences a physical process, even indirectly, it stops being an analytics tool, it becomes part of the control system. And once it becomes part of the control system, it inherits the responsibilities of safety engineering.”
He noted that misconfiguration carries different consequences in cyber physical environments than it does in traditional IT estates, where the result is typically an outage or some instability.
“In cyber physical environments, misconfiguration interacts with physics. A badly tuned threshold in a predictive model, a configuration tweak that alters sensitivity to anomaly detection, a smoothing algorithm that unintentionally filters weak signals, or a quiet shift in telemetry scaling can all change how the system behaves,” he said. “Not catastrophically at first. Subtly. And in tightly coupled infrastructure, subtle is often how cascade begins.”
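A toy illustration of the smoothing point, with entirely invented numbers: an exponentially weighted filter tuned for a quiet signal can flatten a brief, weak excursion so that it never crosses the alarm threshold at all.

```python
# Toy illustration (invented values): aggressive exponential smoothing can hide
# a brief, weak excursion that a human watching the raw trace might question.
def ewma(values, alpha):
    """Exponentially weighted moving average; smaller alpha = heavier smoothing."""
    out, s = [], values[0]
    for v in values:
        s = alpha * v + (1 - alpha) * s
        out.append(s)
    return out

raw = [10.0] * 20 + [10.6] * 5 + [10.0] * 20  # a short +0.6 bump in a flat signal
ALARM = 10.5                                  # hypothetical alert threshold

smoothed = ewma(raw, alpha=0.1)
print(any(v > ALARM for v in raw))       # True  - the raw trace crosses the threshold
print(any(v > ALARM for v in smoothed))  # False - the smoothed trace never does
```

Nothing in the smoothed trace looks wrong; the weak signal has simply been filtered out, which is the kind of subtle behavioral change Gogia describes.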
He added: “Organizations should require explicit articulation of worst-case behavioral scenarios for every AI-enabled operational component. If demand signals are misinterpreted, what happens? If telemetry shifts gradually, how does sensitivity change? If thresholds are misaligned, what boundary condition prevents runaway behavior? When teams cannot answer these questions clearly, governance maturity is incomplete.”
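One possible answer to the boundary-condition question is a hard envelope enforced outside the model itself; the limits below are made-up numbers for an imaginary valve, but the pattern of clamping plus rate limiting does not depend on the AI behaving well.

```python
# Hypothetical envelope around an AI-proposed setpoint: hard engineering limits
# and a maximum rate of change are enforced outside the model, so a misaligned
# threshold or misread demand signal cannot command a runaway excursion.
# The limits are invented numbers for an imaginary valve.
MIN_SETPOINT, MAX_SETPOINT = 0.0, 120.0  # engineered pressure limits (psi)
MAX_STEP = 5.0                           # largest allowed change per control cycle

def bounded_setpoint(proposed: float, last_applied: float) -> float:
    """Clamp the AI's proposal to the physical envelope and rate limit."""
    clamped = min(max(proposed, MIN_SETPOINT), MAX_SETPOINT)
    step = max(-MAX_STEP, min(MAX_STEP, clamped - last_applied))
    return last_applied + step
```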
This article originally appeared on CIO.com.