Since Google announced the release of A2A (Agent2Agent Protocol), there’s been a lot of discussion around how open protocols like A2A and Anthropic’s MCP (Model Context Protocol) are enabling agentic systems.
In this post, we’ll explore how MCP is currently being used at PolyAI and how it lays the foundation for broader use cases—particularly in conjunction with the A2A Protocol—both internally and externally.
What are MCP and A2A Protocol?
Before we dive in, let’s revisit the two key concepts at the core of this post:
- MCP (Model Context Protocol): An open protocol that standardizes how applications provide context to large language models (LLMs). It offers a consistent way to connect AI models to various data sources and tools like CRMs, CDPs, booking platforms and other systems an enterprise might use.
- A2A (Agent2Agent Protocol): A protocol designed to enable collaboration between multiple autonomous agents, allowing them to communicate, delegate tasks, and pursue shared goals in a coordinated way.
Why is MCP important for agentic AI in the enterprise?
At PolyAI, delivering highly engaging customer interactions relies heavily on rich contextual knowledge. This context-driven intelligence is at the heart of our AI agents. The more data our agents can access, reason over, and apply, the better they can provide world-class service to every caller.
A standardized protocol like MCP enables our agents to interact seamlessly with both internal and external data sources and tools. This not only accelerates our ability to scale but also lowers development costs when supporting a broader range of third-party integrations.
PolyAI already offers native integrations with platforms like Salesforce, Zendesk, Snapcall, Google Drive, and OpenTable. By adopting MCP, we can expand this ecosystem more efficiently and securely—delivering even greater convenience and value to our clients.
MCP also acts as a structured interface between our conversation engine and the wider service and data layers. For customers and partners, this translates into smarter, faster, and more personalized interactions.
How to leverage MCP for effective AI agents
Here are a few of the ways we use MCP to deliver more engaging customer experiences and support complex transactions using multiple tools.
Contextual enhancements to conversations
Almost everyone has experienced the frustration of calling customer service to file a complaint or negotiate a better mobile plan, only for the call to drop after 15 minutes. When you call back, you’re forced to repeat everything—from identity verification to explaining your issue—all over again.
One of the key traits that separates a good customer service agent from an exceptional one is their ability to leverage context without requiring the customer to re-explain it. At PolyAI, we’ve introduced contextual enhancements to our conversations to address exactly this problem.
Our AI agents can access both internal and external data sources during a call to determine if the caller is a returning user (after ID verification). Once recognized, the agent can retrieve relevant context immediately, avoiding repetitive questions and providing a seamless experience. This contextual data isn’t always stored in a single place—it can be scattered across multiple sources like call_sentiment, call_metrics, call_context, and call_summary.
To streamline this, we’ve built an MCP server that abstracts these complexities. It acts as a unified interface, allowing our agent to fetch context dynamically as needed. By using MCP to enrich conversations with real-time, context-aware information, our agents can reference past interactions and outcomes—delivering a more fluid, intelligent, and personalized experience.
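The aggregation pattern described above can be sketched in a few lines. This is a hypothetical illustration, not PolyAI's actual server: the source names (`call_sentiment`, `call_metrics`, `call_context`, `call_summary`) come from the post, while the `ContextServer` class, its methods, and the stand-in fetchers are illustrative.

```python
# Hypothetical sketch: a unified context interface that aggregates caller
# history scattered across several backend sources behind one call.
from typing import Any, Callable, Dict


class ContextServer:
    """Fans out to registered context sources and returns one merged view."""

    def __init__(self) -> None:
        self._sources: Dict[str, Callable[[str], Dict[str, Any]]] = {}

    def register(self, name: str, fetch: Callable[[str], Dict[str, Any]]) -> None:
        self._sources[name] = fetch

    def get_context(self, caller_id: str) -> Dict[str, Any]:
        context: Dict[str, Any] = {}
        for name, fetch in self._sources.items():
            try:
                context[name] = fetch(caller_id)
            except Exception:
                # A missing source should degrade gracefully, not fail the call.
                context[name] = {}
        return context


# Stand-in fetchers for illustration only; real ones would query live systems.
server = ContextServer()
server.register("call_summary", lambda cid: {"last_issue": "billing dispute"})
server.register("call_sentiment", lambda cid: {"score": 0.2})

ctx = server.get_context("caller-42")
print(ctx["call_summary"]["last_issue"])  # billing dispute
```

The agent sees a single `get_context` call; which sources exist, and how each is fetched, stays behind the interface.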
Better connections to backend systems
At PolyAI, we’ve built an MCP server that abstracts away the complexities of interacting with various cloud storage systems such as Redis, SQL, and NoSQL. Our AI agents can read from and write to these storage systems through MCP, which significantly reduces the overhead of managing differences in storage technologies and schema rules.
By introducing a data-layer proxy through the MCP server, our AI agent becomes far more adaptable for future third-party integrations. For example, if a client hosts a Jira-compatible MCP interface, our agent could easily integrate with it to create tickets—without custom development work.
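One way to picture the data-layer proxy is as a thin routing layer over interchangeable storage drivers. The sketch below is an assumption about the shape of such a proxy, with an in-memory backend standing in for Redis, SQL, or NoSQL drivers; none of the names are PolyAI's actual API.

```python
# Hypothetical sketch of a data-layer proxy: agents read and write through
# one interface while backend-specific drivers handle storage details.
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional


class StorageBackend(ABC):
    @abstractmethod
    def get(self, key: str) -> Optional[Any]: ...

    @abstractmethod
    def put(self, key: str, value: Any) -> None: ...


class InMemoryBackend(StorageBackend):
    """Stands in for a Redis/SQL/NoSQL driver in this sketch."""

    def __init__(self) -> None:
        self._data: Dict[str, Any] = {}

    def get(self, key: str) -> Optional[Any]:
        return self._data.get(key)

    def put(self, key: str, value: Any) -> None:
        self._data[key] = value


class DataLayerProxy:
    """Routes reads and writes to named backends behind one interface."""

    def __init__(self) -> None:
        self._backends: Dict[str, StorageBackend] = {}

    def mount(self, name: str, backend: StorageBackend) -> None:
        self._backends[name] = backend

    def read(self, backend: str, key: str) -> Optional[Any]:
        return self._backends[backend].get(key)

    def write(self, backend: str, key: str, value: Any) -> None:
        self._backends[backend].put(key, value)


proxy = DataLayerProxy()
proxy.mount("cache", InMemoryBackend())
proxy.write("cache", "session:42", {"verified": True})
print(proxy.read("cache", "session:42"))  # {'verified': True}
```

Swapping a backend then means mounting a new driver, with no change to agent-side code.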
Enabling multi-agent collaboration with MCP + A2A
Here at PolyAI, we’re leveraging MCP in combination with the A2A Protocol to enable enterprises to create a robust ecosystem of AI agents that not only answer calls but also handle other repetitive customer service tasks.
While customer-facing AI agents handle complex customer service interactions in real time, business intelligence agents take care of post-call activities like generating analytics insights, rating calls, and detecting trends.
These agents coordinate with each other using A2A, while MCP provides the structured interface each agent needs to access tools and data—both internally and in external systems.
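The delegation pattern between a customer-facing agent and a business-intelligence agent can be sketched as follows. This is a simplified, hypothetical model: the `AgentBus`, `Task` shape, and the `rate_call` skill are illustrative and do not reflect the A2A wire format, which is JSON-RPC based.

```python
# Hypothetical sketch of agent-to-agent delegation: a customer-facing agent
# hands post-call work to a BI agent that registered the matching skill.
from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass
class Task:
    skill: str
    payload: Dict[str, Any]


class AgentBus:
    """Routes each task to whichever agent registered the requested skill."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[Dict[str, Any]], Any]] = {}

    def register(self, skill: str, handler: Callable[[Dict[str, Any]], Any]) -> None:
        self._handlers[skill] = handler

    def delegate(self, task: Task) -> Any:
        return self._handlers[task.skill](task.payload)


bus = AgentBus()
# The BI agent registers a post-call analytics skill.
bus.register("rate_call", lambda p: {"rating": 5 if p["resolved"] else 2})

# The customer-facing agent delegates once the call ends.
result = bus.delegate(Task("rate_call", {"resolved": True}))
print(result)  # {'rating': 5}
```

In a real A2A deployment the skill registry would come from published agent cards and the calls would cross process boundaries, but the division of labor is the same.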
We’re also building agentic auto-review and self-improvement workflows, where agents can autonomously review conversations and iteratively refine their own performance. (More on this soon.)
What’s next?
Our journey with MCP is just beginning. We’re actively investing in building a flexible, extensible platform that allows partners and clients to integrate their own MCP servers and clients into the PolyAI ecosystem. This opens the door for external teams to create rich, context-aware, and fully integrated customer service agents on top of our platform.
And there’s more to come—we’re bringing MCP to Agent Studio, making it even easier to orchestrate agents with your favorite SaaS tools. Stay tuned for updates!