Now Reading: How to avoid the risks of rapidly deploying AI agents

Loading
svg

How to avoid the risks of rapidly deploying AI agents

NewsAugust 26, 2025Artifice Prime
svg3

In a recent poll of technology executives, 92% said they expected to increase AI spending over the next year. Half expected more than 50% of their organization’s AI deployments to be autonomous within the next 24 months.

These AI investments include machine learning, private large language models, AI agents, and more autonomous agentic AI capabilities.

“The rise of agentic AI means organizations need to make their systems ‘agent-ready’ to successfully use this technology while also managing potential risks,” says Raj Sharma, global managing partner for growth and innovation at EY.

AI agents are not software with well-defined inputs and deterministic outputs. Their functions include connecting language models with enterprise data sources and integrating them with workflows. Many organizations are using agents embedded in CRM, ERP, and other employee workflow systems, while some businesses are experimenting with AI customer experiences.

The speed of deployment is concerning to many experts, who fear haste makes waste, creates security risks, and may result in mounting technical debt. In another survey, 82% of cloud professionals agreed that AI was fueling cloud complexity and spending, and 45% said they were not sufficiently optimizing AI-related cloud usage.

“Enterprises rushing to deploy AI agents, especially via public LLMs, risk ethical lapses, biased outputs, data exposure, regulatory breaches, and wasted spend with little to no business outcomes or value,” says Raj Balasundaram, global VP of AI innovations for customers at Verint. “Common missteps include pushing unvetted models into production, exposing sensitive data to third-party platforms, and lacking observability to track performance, fairness, or compliance.”

The question of speed to market versus resiliency has been an issue with every disruptive technology that’s rapidly moved from early adoption to critical mass.

“CIOs must treat AI as an enterprise-grade capability with secure architecture, ethical governance, and outcome-driven observability to ensure responsible deployment and measurable ROI,” says Balasundaram.

I asked technology and business experts for their advice for organizations seeking to avoid risks while rapidly developing, testing, and deploying AI agents. They offered four essential recommendations.

Prioritize based on business value and user experience

Every week, there are new announcements of SaaS platforms adding AI agent capabilities, and startups launching new AI tools. It’s like the early years of the mobile app stores, where employees could download and try multiple apps to solve any problem. This unregulated access quickly led to significant shadow IT and SaaS sprawl.

“Companies should start with a high-impact AI agent—one that addresses a specific, valuable use case where early wins can be measured, refined, and learned from,” says Bob De Caux, CAIO at IFS. “AI agents need to evolve alongside the business, adapting as goals, markets, and customer needs shift, while continuous iteration ensures they remain aligned and effective over time.”

Prioritizing focus areas for AI agent rollouts helps leaders evaluate benefits and develop data, user experience, and risk management strategies.

“Organizations that rush to roll out multiple, siloed AI agents for each business function, such as invoicing, project tracking, and talent management, risk sacrificing the user experience,” says Claus Jepsen, CTO at Unit4. “Instead, businesses should be intentional in building a seamless, intuitive user experience with a single, unified AI agent.”

Recommendation: Be strategic about the business focus and consider where having a unified experience outweighs point solutions.

Prioritize access control and data security

Several experts weighed in on the risk of enabling AI without defining agent roles, reviewing access controls, supporting data confidentiality requirements, and addressing other data security measures.

“Organizations are unknowingly giving AI agents broad access—reading emails, listening to and transcribing teleconference calls, searching sensitive communications—with little oversight or logging, opening the door to data exposure,” says John Paul Cunningham, CISO of Silverfort. “AI agents should be treated like C-suite members; just as a CTO oversees technology and a CFO manages finances, AI Agents should have defined roles, responsibilities, and least-privilege access to prevent potentially catastrophic breaches.”

Experts suggested instituting data practice prerequisites such as data security posture management and data governance non-negotiables.

“One of the biggest risks in deploying genAI agents is feeding them production data without proper governance,” says Jeff Foster, director of technology and innovation at Red Gate, noting that “teams bypass data masking and access controls in the rush to prototype, exposing sensitive information.” He continues, “a secure-by-design approach, where data classification, lineage, and data-masking capabilities are built into the SDLC, helps mitigate these risks before they scale by ensuring sensitive information is properly identified and protected throughout the AI system’s lifecycle.”

While organizations are likely to recognize poor data quality when it shows up on a dashboard or report, these issues are harder to diagnose with AI agents.

“Teams often struggle to ingest data from diverse systems without exposing sensitive information to AI agents,” says Nikhil Girdhar, senior director for data security at Securiti. “Pilots with SaaS-based AI assistants can result in unintended access and accidental exposure of sensitive data. Many organizations also underestimate how much stale or low-quality data degrades AI system response accuracy.”

One more issue to consider is that AI agents feed on unstructured data, including documents and open-ended data stored in SaaS platforms. It is vital to institute proper reviews and controls on these data sources before data scientists and developers enable AI agents to access them.

“As organizations race to integrate genAI agents into their operations, many don’t set proper guardrails for sensitive documents or customer info, leading to inadvertent leaks,” says Elad Schulman, CEO and co-founder of Lasso Security. “Additionally, organizations struggle to apply consistent oversight and response pre-deployment, which can lead to agents being easily manipulated through prompt injections or misused to exfiltrate data. Security teams must rethink traditional models, treating AI agents as dynamic, autonomous entities that must be governed continuously, rather than bolting on protections after deployment.”

Recommendation: Many organizations are developing and communicating their AI governance policies and measuring the success of data governance and security programs to ensure first principles are considered before testing AI agents.

Take a methodical approach to adding data sources

A green light to proceed with developing and integrating AI agents should be perceived as “proceed with caution.” Because AI models are non-deterministic, testing AI models and validating AI agent results is complicated, particularly when data sources are thrown into a model without a disciplined process.

Michael Berthold, CEO of KNIME, says agentic systems need to have access to all information across an organization and should be carefully scaled. “Deploying too fast will result in a lack of control over how tools and information are used and impact the reliability of an agent’s output and—worse—intermediate decisions made by the agents. Avoiding this trap requires a thoughtful scale-up of the environments that the agentic system has access to, gradually adding capabilities via tools or new data sources, and continuously monitoring the quality of the agent’s outputs and decisions.”

Organizations experimenting with agent integrations with protocols like MCP, ACP, and Agent2Agent have added security and data quality considerations.

“There has been an explosion of open-source MCP servers that teams can quickly adopt, but haven’t passed security checks,” says Dr. Priyanka Tembey, co-founder and CTO of Operant AI. “Risks related to AI agents like over-permissioned access and ability to laterally move through an organization’s data assets seem akin to risks traditionally seen in non-AI-based systems, but the unpredictability, dynamism, and black-box nature of AI agents and MCP tools compound these risks and render traditional security controls irrelevant. AI agents add new threat vectors like tool poisoning, tool spoofing, and prompt jailbreaks, which can manipulate AI agents into nefarious actions that can cause data leakage and exfiltration.”

Tembey recommends adopting practices that provide an optimal level of security straight out of the box, including built-in runtime protection and least-privilege access.

Sam Dover, GM of strategic partnerships and market development at Trustwise, says misconfigured MCPs create blind spots when agents invoke incorrect sources. Dover recommends, “Scope toolsets minimally and embed audit hooks for traceable data fetches. Establish a centralized enterprise MCP registry with a centralized catalog that enforces uniform security standards.”

Recommendation: Taking an iterative approach to developing and deploying AI agents supports long-term success. The underlying data, AI models, and business objectives will evolve, and top organizations will develop AI agent lifecycle management as a competency.

Build agents with a QA and operations plan

To establish lifecycle management, organizations will quickly realize the importance of having robust testing and monitoring strategies to ensure valid and accurate AI agents.

“I’ve watched companies sprint to launch AI agents, only to discover they’re amplifying garbage data, missing nuance due to data silos, and hallucinating decisions that erode user trust,” says Chas Ballew, CEO of Conveyor. “Enterprises solving this problem focus on an evaluation baseline: Define an end-to-end process in a specific domain, analyze AI outputs with human review, QA each agent escalation, and build the business case using a north star KPI to know if and how impact occurs.”

Alan Jacobson, CDAO at Alteryx, says, “While the early mistakes around ensuring quality data being input and sensitive data only being used in models with the appropriate protections, most organizations are past that step and are now seeing how the models drift and change over time.”

Jacobson recommends asking the following questions:

  • How will the model change over time?
  • How will you test if the model is “drifting” from its initial success?
  • If the LLM is put into production and used on an ongoing basis, how will you continue to validate it?

Recommendation: Top organizations will develop LLM testing protocols, end-user tools for escalating AI agent issues, and modelops capabilities to detect drifts in accuracy and performance.

AI agents and more autonomous agentic AI are exciting areas for developing business capabilities. However, smart organizations will apply a disciplined approach, especially in selecting business focus areas and implementing best practices in security, data governance, integrations, quality assurance, and operations.

Original Link:https://www.infoworld.com/article/4040513/how-to-avoid-the-risks-of-rapidly-deploying-ai-agents.html
Originally Posted: Tue, 26 Aug 2025 09:00:00 +0000

0 People voted this article. 0 Upvotes - 0 Downvotes.

Artifice Prime

Atifice Prime is an AI enthusiast with over 25 years of experience as a Linux Sys Admin. They have an interest in Artificial Intelligence, its use as a tool to further humankind, as well as its impact on society.

svg
svg

What do you think?

It is nice to know your opinion. Leave a comment.

Leave a reply

Loading
svg To Top
  • 1

    How to avoid the risks of rapidly deploying AI agents

Quick Navigation