The problem with AI agent-to-agent communication protocols
You know the routine: The IT industry, driven as much by vendor ambition as by necessity, develops many competing standards to solve a simple problem. Today’s culprit: agent-to-agent communication in AI.
The recent rise of so-called “standards” for how intelligent agents should communicate echoes past battles over service-oriented architecture, web services, and messaging middleware. The key difference is that now the confusion could prevent one of the most promising areas in enterprise technology, agentic AI, from ever providing real value.
Let’s set the scene. Intelligent agents, whether they are specialized large language models (LLMs), service-brokering bots, Internet of Things digital twins, or workflow managers, need to communicate efficiently, securely, and transparently. This is a typical interoperability issue. A well-established industry could, in theory, create a straightforward, practical protocol and move forward. Instead, we see a flood of emerging standards from too many “expert” voices with an underlying agenda, each accompanied by a white paper, a community call, a sponsored conference, and, of course, an ecosystem. This is the core problem.
The alphabet soup of protocols
Let’s look at a cross-section of just some of the technologies that are on offer or are in the works:
- OpenAI’s Function Calling and OpenAI Agent Protocol (OAP) are promoted as ways to let its models interact more flexibly with APIs, enhancing prompts with context and coordination logic. There’s talk of standardizing this into the “OAP Standard,” but details remain unclear.
- Microsoft’s Semantic Kernel (SK) Extensions are designed to foster agent communication and coordination across various toolkits, including Microsoft’s own Copilot and external agents, by using plug-in skills and manifest-driven connectors.
- Meta’s Agent Communication Protocol (Meta-ACP) focuses on graph-based intent resolution, message-passing semantics, and decentralized trust. The pitch: make agents modular and composable at internet scale.
- LangChain Agent Protocol (LCAP) builds on the open source LangChain framework with a focus on interoperability among various agent systems. Their protocol emphasizes chained tool invocation and task-switching, providing compatibility layers with OpenAI and Anthropic models.
- Stanford’s Autogen Protocol supports research-level coordination among AI agents, particularly in collaborative planning and negotiation contexts.
- Anthropic’s Claude-Agent Protocol is less of a full-stack protocol and more of a set of message formatting and invocation best practices aimed at aligning with human intent and maintaining context across multi-agent dialogues.
- The W3C Multi-Agent Protocol Community Group is proposing universal message types, schemas, and agent discovery mechanisms, with the stated goal of making “agents as discoverable as web pages.”
- IBM’s AgentSphere focuses on multi-modal agent communication across hybrid cloud environments, with specifications for policy negotiation and session transfer.
This list isn’t complete. There are dozens more protocols mentioned in Reddit posts, Substack essays, and well-funded stealth startups, each claiming to be the one true answer to multi-agent coordination.
Competition breeds silos
Some will say, “Competition breeds innovation.” That’s the party line. But for anyone who’s run a large IT organization, it means increased integration work, risk, cost, and vendor lock-in—all to achieve what should be the technical equivalent of exchanging a business card.
Let’s not forget history. The ’90s saw the rise and fall of CORBA and DCOM, each claiming to be the last word in distributed computing. The 2000s blessed us with WS-* (the asterisk is a wildcard because the list of specs seemed endless), most of which are now forgotten. REST and JSON (JavaScript Object Notation) finally won, mostly because they didn’t try too hard, but not before millions of dollars were wasted on false starts and incompatible ecosystems.
The truth: When vendors promote their own communication protocols, they build silos instead of bridges. Agents trained on one protocol can’t interact seamlessly with those speaking another dialect. Businesses end up either locking into one vendor’s standard, writing costly translation layers, or waiting for the market to move on from this round of wheel reinvention.
Multiple standards means no standards
It’s a fundamental principle: producing 20 standards for the same need essentially results in no standards. There is no network effect, only confusion. The time spent debating minor protocol differences, lobbying standards organizations, and launching compatibility initiatives is time not spent creating value or solving end-user business issues.
We in IT love to make simple things complicated. The urge to create a universal, infinitely extensible, plug-and-play protocol is irresistible. But the real-world lesson is that 99% of enterprise agent interaction can be handled with a handful of message types: request, response, notify, error. The rest—trust negotiation, context passing, and the inevitable “unknown unknowns”—can be managed incrementally, so long as the basic messaging is interoperable.
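To make that concrete, here is a minimal sketch of what such an envelope could look like, in Python for illustration. Nothing here comes from any of the protocols above; the field names (`sender`, `recipient`, `in_reply_to`, and so on) are assumptions chosen for readability.

```python
from dataclasses import dataclass, field
from typing import Any, Literal
import uuid

# Hypothetical envelope for the four message types named above,
# plus just enough metadata to correlate a reply with its request.
@dataclass
class AgentMessage:
    type: Literal["request", "response", "notify", "error"]
    sender: str                     # agent identifier, e.g., "billing-agent"
    recipient: str                  # target agent identifier
    body: dict[str, Any]            # task payload; schema agreed per use case
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
    in_reply_to: str | None = None  # set on response/error to link the request
```

Anything beyond these four verbs, such as trust negotiation or context passing, can travel inside `body` until it has proven it deserves a first-class field.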
Let’s be honest. Most of the churn around standards is more about gaining mindshare and securing business development budgets than solving architecture problems. Announcing a standard protocol is about seeding an ecosystem, not achieving consensus. Everyone aspires to be the TCP/IP of AI agents, but history shows that protocol dominance is earned through grassroots adoption, not white papers or marketing campaigns.
Go for the minimum
Here’s an unpopular truth: The industry would be best served by collectively deciding on a minimum viable protocol and iterating from there. Something as dead simple as HTTP+JSON with common schemas would meet 80% of use cases, with optional extensions as needs emerge. Instead, today we have a Tower of Babel: overcomplex schemes, edge-case features no one will use, and competing vendor alliances.
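To show how little machinery that implies, here is a hedged sketch of the client side using only the Python standard library. The endpoint URL, agent names, and payload fields are all hypothetical; any HTTP server that accepts JSON would satisfy the other end of the contract.

```python
import json
import urllib.request

def send(message: dict, endpoint: str = "http://localhost:8080/agent") -> dict:
    """POST one JSON message to another agent and return its JSON reply."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Assumes an agent is actually listening at the (hypothetical) endpoint.
reply = send({
    "type": "request",
    "sender": "planner-agent",
    "recipient": "search-agent",
    "body": {"task": "find_supplier", "part": "M8 bolt"},
})
assert reply["type"] in ("response", "error")
```

That’s the whole wire format. Optional extensions can ride along in headers or in `body` without breaking any agent that ignores them.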
Business leaders and architects should resist jumping on every protocol bandwagon. Demand interoperability, evaluate whether a “standard” actually solves a real pain point, and when in doubt, build abstraction layers that prevent lock-in.
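On that last point, the abstraction layer doesn’t need to be elaborate. Below is a minimal port-and-adapter sketch; the vendor name and method are invented stand-ins, since the point is the shape, not any specific SDK.

```python
from abc import ABC, abstractmethod
from typing import Any

class AgentTransport(ABC):
    """Neutral interface the application codes against."""
    @abstractmethod
    def send(self, recipient: str, body: dict[str, Any]) -> dict[str, Any]: ...

class VendorAAdapter(AgentTransport):
    def send(self, recipient: str, body: dict[str, Any]) -> dict[str, Any]:
        # Translate to vendor A's wire format, call its SDK, and map the
        # reply back to the neutral shape (stubbed here for illustration).
        return {"type": "response", "body": {}}

def run_workflow(transport: AgentTransport) -> dict[str, Any]:
    # Application logic never imports a vendor SDK directly, so moving
    # to another vendor means writing one new adapter, not new workflows.
    return transport.send("inventory-agent", {"task": "check_stock"})

result = run_workflow(VendorAAdapter())
```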
We urgently need open protocols for AI agent communication. Too many competing standards render them all essentially meaningless. The IT industry has gone through this cycle before. Unless we break free from it, agentic AI will just be another example of wasted time and effort. Let’s not allow protocol vanity to get in the way of creating real business value.
Original Link: https://www.infoworld.com/article/4033863/the-problem-with-ai-agent-to-agent-communication-protocols.html
Originally Posted: Tue, 05 Aug 2025 09:00:00 +0000