Agentic AI exposes what we’re doing wrong

News | January 23, 2026 | Artifice Prime

We’ve spent the last decade telling ourselves that cloud computing is mostly a tool problem. Pick a provider, standardize a landing zone, automate deployments, and you’re “modern.” Agentic AI makes that comforting narrative fall apart because it behaves less like a traditional application and more like a continuously operating software workforce that can plan, decide, act, and iterate.

Agentic AI has changed cloud computing, but not in the way the hype machine wants you to believe. It hasn’t magically replaced engineering, nor has it made architecture irrelevant. It has made weak architecture, fuzzy governance, and sloppy cost controls impossible to ignore. If you are already running cloud with strong disciplines, agentic AI is an accelerant. If you aren’t, it’s a stress test you will fail, publicly and expensively.

Agentic AI is an AI system that can autonomously plan and execute multistep actions toward a goal, often by using tools and services in its environment. That’s the key difference from “chat”: An agent doesn’t just recommend what to do; it can actually do it, repeatedly, at machine speed, and it will keep doing it until you stop it or constrain it properly.

In cloud terms, an agent can become a first-class cloud actor: provisioning resources, calling APIs, moving data, modifying configurations, opening tickets, triggering workflows, and chaining services. This means the cloud now supports autonomous decision loops, which have failure modes distinct from those of web apps with fixed request/response paths.

Adapting cloud networks to agentic AI

Traditional cloud networking assumptions are already shaky: perimeter thinking, coarse segmentation, and “allow lists” that grow without limits. Agentic AI makes those patterns actively dangerous because agents don’t just talk to one back end and one database; they discover, orchestrate, and pivot across systems as part of normal operations. The network becomes a dynamic substrate for tool use rather than a static map of application tiers.

What needs to change is the level of precision and adaptability in network controls. You need networking that supports fine-grained segmentation, short-lived connectivity, and policies that can be continuously evaluated rather than set once and forgotten. You also need to treat east-west traffic visibility as a core requirement because agents will generate many internal calls that look legitimate unless you understand intent, identity, and context.

Finally, plan for bursty, unpredictable communication patterns. Agents will fan out, call many endpoints, retry aggressively, and trigger cascades across regions and services if you let them. That pushes you toward stronger service-to-service policy, more transparent egress governance, and tighter coupling between networking telemetry and runtime controls, so you can see and stop pathological behavior before it becomes an outage or a bill.
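
To make that concrete, here is a minimal sketch, in Python with hypothetical hosts and limits, of the kind of in-process egress gate an agent's tool layer might consult before any outbound call. Real enforcement belongs in the network and service mesh; this only illustrates the policy shape: deny unknown destinations by default and fail closed when fan-out turns pathological.

```python
# Minimal sketch (hypothetical names): an in-process egress gate an agent's
# tool layer consults before any outbound call. This is illustrative only;
# real enforcement would live in network policy or a service mesh.
import time
from collections import deque

ALLOWED_HOSTS = {"api.internal.example.com", "tickets.example.com"}  # explicit egress allow list
MAX_CALLS_PER_MINUTE = 60  # throttle agent fan-out and retry storms

_recent_calls: deque = deque()

def egress_permitted(host: str) -> bool:
    """Return True only if the host is allow-listed and the call budget holds."""
    now = time.monotonic()
    while _recent_calls and now - _recent_calls[0] > 60:
        _recent_calls.popleft()           # drop calls older than one minute
    if host not in ALLOWED_HOSTS:
        return False                      # deny unknown destinations by default
    if len(_recent_calls) >= MAX_CALLS_PER_MINUTE:
        return False                      # pathological fan-out: fail closed
    _recent_calls.append(now)
    return True

if __name__ == "__main__":
    print(egress_permitted("api.internal.example.com"))  # True
    print(egress_permitted("unknown.example.net"))        # False
```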

Aligning with identity-based security

Cloud security has shifted toward identity for years, and agentic AI completes the move. When the user is an autonomous agent, control relies solely on identity: what the agent is, its permitted actions, what it can impersonate, and what it can delegate. Network location and static IP-based trust weaken when actions are initiated by software that can run anywhere, scale instantly, and change execution paths.

This is where many enterprises will stumble. They’ll give an agent broad API permissions to be helpful, then act surprised when it becomes broadly dangerous. The correct posture is to treat every agent as a privileged workload until proven otherwise because agents are effectively operators with superhuman speed. You need explicit identities for agents, tight authorization boundaries, short-lived credentials, audited tool access, and strong separation between environments and duties.
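
As one illustration, assuming AWS and boto3 (the role ARN, bucket, and policy below are hypothetical), an agent run can assume a dedicated role with a short session lifetime and an inline session policy that narrows it to exactly the resources the task needs:

```python
# Minimal sketch, assuming AWS and boto3; role ARN and policy are hypothetical.
# The agent gets its own identity, a short-lived session, and an inline policy
# that scopes it down further than the role it assumes.
import json
import boto3

sts = boto3.client("sts")

scoped_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],                 # only what this task needs
        "Resource": ["arn:aws:s3:::agent-inbox/*"]  # only where it needs it
    }],
})

resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/agent-ticket-triage",  # per-agent role
    RoleSessionName="agent-run-2026-01-23",                        # traceable in audit logs
    DurationSeconds=900,          # 15 minutes: credentials expire with the task
    Policy=scoped_policy,         # a session policy can only narrow, never widen
)
creds = resp["Credentials"]       # pass these, not long-lived keys, to the agent's tools
```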

Identity-based security also forces clarity about who did what; there is no room for hand-waving. If an agent modifies infrastructure, moves data, or grants access, you need to be able to trace the action back to a specific identity, acting under a specific policy, with an approval or constraint chain. Governance isn’t optional; it’s the essential control framework for autonomous operations.
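
The audit record itself does not need to be elaborate. A minimal sketch, with illustrative field names rather than any standard schema, is one structured event per agent action that ties it to an identity, the policy that permitted it, and any approval that gated it:

```python
# Sketch of the minimum audit record per agent action (field names are
# illustrative, not a standard): every tool call ties back to an identity,
# the policy that allowed it, and any approval that gated it.
import json
import logging
from datetime import datetime, timezone
from typing import Optional

audit_log = logging.getLogger("agent.audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def record_action(agent_id: str, action: str, target: str,
                  policy_id: str, approval_id: Optional[str]) -> None:
    """Emit one structured, append-only audit event per agent action."""
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,        # which agent identity acted
        "action": action,            # what it did
        "target": target,            # what it acted on
        "policy_id": policy_id,      # which policy permitted it
        "approval_id": approval_id,  # human approval, if one was required
    }))

record_action("agent-ticket-triage", "s3:GetObject",
              "s3://agent-inbox/case-4411.json", "pol-readonly-inbox", None)
```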

Cloud finops and cultural shifts

If you think cloud bills were unpredictable before, wait until you unleash systems that can decide to use more resources in pursuit of a goal. Agentic AI changes how cloud resources are leveraged by making consumption far more elastic, exploratory, and continuous. Agents will spin up ephemeral environments, run iterative experiments, call paid APIs, generate and store large artifacts, and repeat tasks until they converge—sometimes without a natural stopping point.

The old finops playbook of tagging, showback, and monthly optimization is not enough on its own. You need near-real-time cost visibility and automated guardrails that stop waste as it happens, because “later” can mean “after the budget is gone.” Put differently, the unit economics of agentic systems must be designed, measured, and controlled like any other production system, ideally more aggressively because the feedback loop is faster.
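
A guardrail of that kind can be as simple as a per-run spend meter that the agent loop charges on every step and that fails closed when the cap is hit. The sketch below uses hypothetical names and rough per-step cost estimates; the point is the shape, not the numbers:

```python
# Minimal sketch (all names hypothetical) of a per-run spend guardrail:
# the agent loop meters estimated cost per step and fails closed when the
# budget is exhausted, instead of waiting for a monthly bill to find out.
class BudgetExceeded(RuntimeError):
    pass

class RunBudget:
    """Track estimated spend for one agent run and enforce a hard cap."""

    def __init__(self, cap_usd: float) -> None:
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def charge(self, estimated_cost_usd: float, label: str) -> None:
        self.spent_usd += estimated_cost_usd
        if self.spent_usd > self.cap_usd:
            # Stop the run as waste happens, not after the budget is gone.
            raise BudgetExceeded(
                f"{label}: spent ${self.spent_usd:.2f} of ${self.cap_usd:.2f} cap"
            )

budget = RunBudget(cap_usd=5.00)
budget.charge(0.40, "llm call")        # fine
budget.charge(1.10, "vector search")   # fine
# budget.charge(4.00, "retry storm")   # would raise BudgetExceeded
```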

There’s also a cultural shift here that many leaders will resist. If you cannot answer, in plain language, what value you’re getting per unit of agent activity—per workflow, per resolved ticket, or per customer outcome—then you don’t have an AI strategy; you have a spending strategy. Agentic AI will punish organizations that treat the cloud as an infinite sandbox and success metrics as a slide deck.

Good architecture is crucial

The industry’s favorite myth is that architecture slows innovation. In reality, architecture prevents innovation from turning into entropy. Agentic AI accelerates entropy by generating more actions, integrations, permissions, data movement, and operational variability than human-driven systems typically do.

Planning for agentic AI systems means designing boundaries agents cannot cross, defining tool contracts they must obey, and creating “safe failure” modes that degrade gracefully rather than improvising into catastrophe. It also means thinking through data architecture with new seriousness: where context comes from, how it is governed, how it is retained, and how to prevent agents from leaking sensitive information through perfectly reasonable tool usage. You’re not just building an app; you’re building an autonomous operating model that happens to be implemented in software.
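
A tool contract can be as plain as a registry of allow-listed tools with declared parameters, validated before anything executes. The sketch below is illustrative, with hypothetical tool names, not a prescription for any particular agent framework:

```python
# Sketch of a "tool contract" (names are illustrative): agents may only call
# tools that are registered with a declared parameter set, and every call is
# validated against that contract before it runs.
from dataclasses import dataclass
from typing import Any, Callable, Dict, FrozenSet

@dataclass(frozen=True)
class ToolContract:
    name: str
    required_params: FrozenSet[str]
    handler: Callable[..., Any]

REGISTRY: Dict[str, ToolContract] = {}

def register(contract: ToolContract) -> None:
    REGISTRY[contract.name] = contract

def call_tool(name: str, **params: Any) -> Any:
    contract = REGISTRY.get(name)
    if contract is None:
        raise PermissionError(f"tool '{name}' is not in the contract registry")
    missing = contract.required_params - params.keys()
    if missing:
        raise ValueError(f"tool '{name}' missing required params: {sorted(missing)}")
    return contract.handler(**params)

# Example: a read-only ticket lookup is allowed; anything unregistered is rejected.
register(ToolContract(
    name="get_ticket",
    required_params=frozenset({"ticket_id"}),
    handler=lambda ticket_id: {"ticket_id": ticket_id, "status": "open"},
))

print(call_tool("get_ticket", ticket_id="T-4411"))
# call_tool("delete_database")  -> PermissionError: not in the contract registry
```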

Good architecture here is pragmatic, not academic. It focuses on reference patterns, standardized environments, consistent identity and policy enforcement, deterministic workflows when possible, and explicit exception paths when autonomy is allowed. Most importantly, it recognizes the agent as a new runtime consumer of your cloud platform and designs for it deliberately, rather than bolting agents onto old assumptions.

Discipline and responsibility

Agentic AI has raised the operational bar for responsible cloud computing. Networking must become more policy-driven and observable to support autonomous, tool-using traffic patterns. Security must become more identity-centric because the actor is now software that can operate like a human administrator. Finops must transition into real-time governance because agents can consume resources at machine speed. Architecture must lead, not follow, because the cost of unplanned autonomy is failure at scale.

If you want the blunt takeaway, it’s this: Agentic AI makes cloud discipline non-negotiable. The organizations that treat it as an architectural and operational shift will do well, and the ones that treat it as another feature to turn on will learn quickly how expensive improvisation can be.

Original Link: https://www.infoworld.com/article/4120858/agentic-ai-exposes-what-were-doing-wrong.html
Originally Posted: Fri, 23 Jan 2026 09:00:00 +0000


Artifice Prime

Artifice Prime is an AI enthusiast with over 25 years of experience as a Linux sysadmin. They have an interest in artificial intelligence, its use as a tool to further humankind, and its impact on society.
