The enemy within: AI as the attack surface
Boards of directors are pressing for productivity gains from large language models and AI assistants. Yet the same features that make AI useful – browsing live websites, remembering user context, and connecting to business apps – also expand the cyber attack surface.
Tenable researchers have published a set of vulnerabilities and attacks under the title “HackedGPT”, showing how indirect prompt injection and related techniques could enable data exfiltration and malware persistence. According to the company’s advisory, some issues have been remediated, while others remained exploitable at the time of disclosure.
Removing the inherent risks from AI assistants’ operations requires governance, controls, and operating methods that treat AI as a user or device – subject to strict audit and monitoring.
The Tenable research shows the failures that can turn AI assistants into security liabilities. Indirect prompt injection hides instructions in web content that the assistant reads while browsing – instructions that trigger data access the user never intended. Another vector seeds malicious instructions through a crafted front-end query.
The business impact is clear: incident response, legal and regulatory review, and steps to limit reputational harm.
Existing research shows that assistants can leak personal or sensitive information through injection techniques, leaving AI vendors and cybersecurity teams to patch issues as they emerge.
The pattern is familiar to anyone in the technology industry: as features expand, so do failure modes. Treating AI assistants as live, internet-facing applications – not merely productivity drivers – can improve resilience.
How to govern AI assistants, in practice
1) Establish an AI system registry
Inventory every model, assistant, or agent in use – in public cloud, on-premises, and software-as-a-service – in line with the NIST AI RMF Playbook. Record owner, purpose, capabilities (browsing, API connectors), and data domains accessed. Without such an inventory, “shadow agents” can persist with privileges no one tracks. Shadow AI – at one stage encouraged by the likes of Microsoft, which urged users to deploy home Copilot licences at work – is a significant threat.
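As a minimal sketch only – the field names below are assumptions, not the NIST schema or any vendor’s format – a registry entry can be captured as a structured record that security and finance teams can query:

```python
from dataclasses import dataclass, field

@dataclass
class AssistantRecord:
    """One entry in a hypothetical AI system registry (illustrative fields only)."""
    name: str                  # e.g. "support-copilot"
    owner: str                 # accountable team or individual
    purpose: str               # business use case
    environment: str           # "public-cloud", "on-prem", or "saas"
    capabilities: list[str] = field(default_factory=list)  # e.g. ["browsing", "api-connectors"]
    data_domains: list[str] = field(default_factory=list)  # data classes the assistant can reach
    retention_days: int = 30   # how long conversation memory is kept

registry = [
    AssistantRecord(
        name="support-copilot",
        owner="customer-ops",
        purpose="summarise support tickets",
        environment="saas",
        capabilities=["api-connectors"],
        data_domains=["tickets", "crm"],
        retention_days=7,
    ),
]

# Flag potential "shadow agents": risky capabilities with no accountable owner.
unowned = [r.name for r in registry
           if ("browsing" in r.capabilities or "api-connectors" in r.capabilities)
           and not r.owner]
```

Even a simple list like this makes it possible to spot assistants whose capabilities exceed what their stated purpose requires.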
2) Separate identities for humans, services, and agents
Identity and access management often conflates user accounts, service accounts, and automation devices. Assistants that access websites, call tools, and write data need distinct identities and must be subject to zero-trust, least-privilege policies. Mapping agent-to-agent chains (who asked whom to do what, over which data, and when) provides a bare-minimum audit trail that ensures some degree of accountability. It’s worth noting that agentic AI is susceptible to ‘creative’ output and actions, yet unlike human staff, it is not constrained by disciplinary policies.
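A hypothetical sketch of that audit trail follows, assuming identity prefixes (“human:”, “agent:”) and field names of our own invention rather than any specific IAM product’s:

```python
import json
import time
import uuid
from typing import Optional

def log_delegation(audit_log: list, requester: str, agent: str,
                   action: str, data_scope: str, parent: Optional[str] = None) -> str:
    """Append one link of an agent-to-agent chain to an audit trail and return its id."""
    entry = {
        "id": str(uuid.uuid4()),
        "parent": parent,          # the request that triggered this step, if any
        "timestamp": time.time(),
        "requester": requester,    # who asked: a human, a service, or another agent
        "agent": agent,            # which agent identity performed the action
        "action": action,          # what was done
        "data_scope": data_scope,  # which data domain was touched
    }
    audit_log.append(entry)
    return entry["id"]

audit_log: list = []
# A human asks an orchestrator agent, which delegates retrieval to a second agent.
root = log_delegation(audit_log, "human:j.smith", "agent:orchestrator",
                      "summarise-contract", "legal-docs")
log_delegation(audit_log, "agent:orchestrator", "agent:retriever",
               "fetch-document", "legal-docs", parent=root)

print(json.dumps(audit_log, indent=2))
```

The `parent` field is what turns isolated events into a chain that can answer “who asked whom to do what” after the fact.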
3) Constrain risky features by context
Make browsing and independent actions by AI assistants opt-in per use case. For customer-facing assistants, set short retention times unless there is a strong reason and a lawful basis otherwise. For internal engineering, restrict assistants to segregated projects with strict logging. Apply data-loss prevention (DLP) to connector traffic if assistants can reach file stores, messaging, or e-mail. Previous plugin and connector issues demonstrate how integrations increase exposure.
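One way to express opt-in capabilities is a default-deny policy table keyed by use case. The policy names and fields below are illustrative assumptions, not a particular platform’s configuration format:

```python
# Hypothetical per-use-case policy table: risky capabilities are off unless explicitly enabled.
POLICIES = {
    "customer-facing": {
        "browsing": False, "memory": False, "retention_days": 1, "connectors": [],
    },
    "internal-engineering": {
        # segregated project with strict logging
        "browsing": True, "memory": True, "retention_days": 30, "connectors": ["code-repo"],
    },
}

def is_allowed(use_case: str, capability: str) -> bool:
    """Default-deny: unknown use cases or capabilities get no risky features."""
    return bool(POLICIES.get(use_case, {}).get(capability, False))

assert is_allowed("internal-engineering", "browsing")
assert not is_allowed("customer-facing", "browsing")
assert not is_allowed("unknown-team", "memory")  # anything unregistered is denied
```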
4) Monitor like any internet-facing app
- Capture assistant actions and tool calls as structured logs.
- Alert on anomalies: sudden spikes in browsing to unfamiliar domains; attempts to summarise opaque code blocks; unusual memory-write bursts; or connector access outside policy boundaries (a sketch of such checks follows this list).
- Incorporate injection tests into pre-production checks.
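A minimal sketch of the alerting idea above, assuming invented log fields (`action`, `domain`) and thresholds that would need tuning to each assistant’s baseline:

```python
from collections import Counter

# Illustrative structured log entries for assistant tool calls; field names are assumptions.
events = [
    {"assistant": "support-copilot", "action": "browse", "domain": "docs.example.com"},
    {"assistant": "support-copilot", "action": "browse", "domain": "unfamiliar-site.xyz"},
    {"assistant": "support-copilot", "action": "memory_write", "domain": None},
    {"assistant": "support-copilot", "action": "memory_write", "domain": None},
    {"assistant": "support-copilot", "action": "memory_write", "domain": None},
]

ALLOWED_DOMAINS = {"docs.example.com", "intranet.example.com"}  # assumed policy boundary
MEMORY_WRITE_THRESHOLD = 2                                      # tune to the assistant's normal baseline

alerts = []

# Browsing outside the allow-list is a common indirect-injection path.
for event in events:
    if event["action"] == "browse" and event["domain"] not in ALLOWED_DOMAINS:
        alerts.append(f"browse outside policy: {event['domain']}")

# A burst of memory writes can indicate attempted persistence.
write_count = Counter(e["action"] for e in events)["memory_write"]
if write_count > MEMORY_WRITE_THRESHOLD:
    alerts.append(f"memory-write burst: {write_count} writes")

for alert in alerts:
    print("ALERT:", alert)
```

In practice these checks would feed an existing SIEM or alerting pipeline rather than print statements; the point is that assistant activity produces structured, queryable events.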
5) Build the human muscle
Train developers, cloud engineers, and analysts to recognise injection symptoms. Encourage users to report odd behaviour (e.g., an assistant unexpectedly summarising content from a site they didn’t open). Make it normal to quarantine an assistant, clear memory, and rotate its credentials after suspicious events. The skills gap is real; without upskilling, governance will lag adoption.
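The quarantine-clear-rotate routine can be codified as a runbook. The helper functions below are hypothetical placeholders for whatever admin APIs a given assistant platform actually exposes:

```python
def contain_assistant(assistant_id: str) -> None:
    """Containment runbook sketch for a suspicious assistant (hypothetical helpers)."""
    quarantine(assistant_id)          # suspend browsing, connectors, and outbound actions
    clear_memory(assistant_id)        # wipe persisted context that may carry injected instructions
    rotate_credentials(assistant_id)  # invalidate tokens and API keys the assistant held

# Placeholder implementations so the sketch runs end to end; replace with real admin calls.
def quarantine(assistant_id: str) -> None:
    print(f"[runbook] quarantined {assistant_id}")

def clear_memory(assistant_id: str) -> None:
    print(f"[runbook] cleared memory for {assistant_id}")

def rotate_credentials(assistant_id: str) -> None:
    print(f"[runbook] rotated credentials for {assistant_id}")

contain_assistant("support-copilot")
```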
Decision points for IT and cloud leaders
| Question | Why it matters |
|---|---|
| Which assistants can browse the web or write data? | Browsing and memory are common injection and persistence paths; constrain per use case. |
| Do agents have distinct identities and auditable delegation? | Prevents “who did what?” gaps when instructions are seeded indirectly. |
| Is there a registry of AI systems with owners, scopes, and retention? | Supports governance, right-sizing of controls, and budget visibility. |
| How are connectors and plugins governed? | Third-party integrations have a history of security issues; apply least privilege and DLP. |
| Do we test for 0-click and 1-click vectors before go-live? | Public research shows both are feasible via crafted links or content. |
| Are vendors patching promptly and publishing fixes? | Feature velocity means new issues will appear; verify responsiveness. |
Risks, cost visibility, and the human factor
- Hidden cost: assistants that browse or retain memory consume compute, storage, and egress in ways finance teams and those monitoring per-cycle XaaS use may not have modelled. A registry and metering reduce surprises.
- Governance gaps: audit and compliance frameworks built for human users won’t automatically capture agent-to-agent delegation. Align controls with OWASP LLM risks and NIST AI RMF categories.
- Security risk: indirect prompt injection can be invisible to users, delivered via media, text, or code formatting, as research has shown.
- Skills gap: many teams haven’t yet merged AI/ML and cybersecurity practices. Invest in training that covers assistant threat-modelling and injection testing.
- Evolving posture: expect a cadence of new flaws and fixes. OpenAI’s remediation of a zero-click path in late 2025 is a reminder that vendor posture changes quickly and needs verification.
Bottom line
The lesson for executives is simple: treat AI assistants as powerful, networked applications with their own lifecycle and a propensity both for being targeted by attackers and for taking unpredictable action. Put a registry in place, separate identities, constrain risky features by default, log everything meaningful, and rehearse containment.
With these guardrails in place, agentic AI is more likely to deliver measurable efficiency and resilience – without quietly becoming your newest breach vector.