The adoption of Model Context Protocol (MCP) in enterprises is outpacing the development of necessary security controls, creating a significant risk of data breaches and other security threats. According to industry leaders, including Spiros Xanthos, founder and CEO of Resolve AI, and Jon Aniano, SVP of product and CRM applications at Zendesk, the increasing use of autonomous AI agents in enterprise systems is introducing new attack surfaces that traditional security frameworks are not equipped to handle.

The problem is that traditional security frameworks are built around human interactions, and there is not yet an agreed-upon construct for AI agents that have personas and can act autonomously. As a result, AI agents are being given more access and connections to enterprise systems than any other class of software, making them a bigger attack surface than anything security teams have had to govern before. Aniano noted that MCP servers, which enterprises are increasingly adopting to simplify integration between agents, tools, and data, tend to be “extremely permissive” and are “actually probably worse than an API” because they lack the access controls that APIs impose on their callers.
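One way to narrow an overly permissive tool surface of the kind Aniano describes is to put an explicit per-agent allowlist in front of the tool dispatcher. The following is a minimal sketch under assumed names; the tool names, agent IDs, and policy shape are hypothetical and not part of any real MCP SDK:

```python
# Hypothetical sketch: an allowlist gate in front of an agent's tool dispatcher.
# Agent IDs, tool names, and the policy structure are illustrative only.

class ToolAccessDenied(Exception):
    """Raised when an agent calls a tool outside its granted scope."""

# Per-agent allowlist: an agent may only call tools explicitly granted to it.
AGENT_TOOL_POLICY = {
    "support-agent": {"search_tickets", "read_ticket"},                    # read-only
    "ops-agent": {"search_tickets", "read_ticket", "update_ticket"},       # read/write
}

# Stub tool implementations, standing in for real MCP-exposed tools.
TOOLS = {
    "search_tickets": lambda query: [f"ticket matching {query!r}"],
    "read_ticket": lambda ticket_id: {"id": ticket_id, "status": "open"},
    "update_ticket": lambda ticket_id, status: {"id": ticket_id, "status": status},
}

def dispatch(agent_id: str, tool_name: str, args: dict):
    """Check the allowlist before routing an agent's call to a tool."""
    allowed = AGENT_TOOL_POLICY.get(agent_id, set())
    if tool_name not in allowed:
        raise ToolAccessDenied(f"{agent_id} may not call {tool_name}")
    return TOOLS[tool_name](**args)
```

The point of the sketch is the default: an agent absent from the policy gets an empty set, so access must be granted deliberately rather than revoked after the fact.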

The lack of an agreed-upon technical agent-to-agent protocol is a major challenge, making it difficult to balance user expectations against the need to keep platforms safe. Xanthos warned that a misused AI agent can result in a data breach or worse. Aniano added that the industry is still working out how to grant fine-grained access to AI agents: some existing security tools can scope access to specific indexes in underlying data stores, but most controls are broader and oriented toward human users.
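The index-level scoping Aniano alludes to can be pictured as a grant check applied before any query reaches the store. A hypothetical sketch, with made-up agent and index names:

```python
# Hypothetical sketch: restrict an agent to named indexes in a data store,
# rather than granting the broad, human-style access most tools assume today.

AGENT_INDEX_GRANTS = {
    "billing-agent": {"invoices"},            # may only touch the invoices index
    "analyst-agent": {"invoices", "usage"},
}

# Toy stand-in for an indexed data store.
DATA_STORE = {
    "invoices": ["inv-001", "inv-002"],
    "usage": ["2024-06 usage report"],
    "hr_records": ["confidential"],           # no agent is granted this index
}

def query_index(agent_id: str, index: str):
    """Refuse the query unless the agent holds a grant for that index."""
    if index not in AGENT_INDEX_GRANTS.get(agent_id, set()):
        raise PermissionError(f"{agent_id} has no grant for index {index!r}")
    return DATA_STORE[index]
```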

The issue of accountability is also a major concern: Aniano noted that it can be tricky to determine who is at fault when an AI agent takes an incorrect action. As AI becomes more involved in user interactions, the audit trail grows more complex and responsibility becomes harder to assign. To keep agents from going off the rails, Zendesk tends to be “very strict” about access and scope, though customers can define their own guardrails based on their needs.
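One common way to keep that audit trail tractable is to record the full actor chain (the human who initiated a request and each agent that acted on it) alongside every action. A small illustrative sketch, not drawn from any vendor's product:

```python
# Illustrative sketch: log each action with the full actor chain
# (human -> agent -> sub-agent), so a later audit can attribute every step.

from datetime import datetime, timezone

AUDIT_LOG: list = []

def record_action(actor_chain, action, target):
    """Append an attributable audit entry and return it."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor_chain": list(actor_chain),  # who initiated, which agents acted
        "action": action,
        "target": target,
    }
    AUDIT_LOG.append(entry)
    return entry

# A human request carried out by an agent is attributed to both.
record_action(["alice@example.com", "support-agent"], "close_ticket", "T-42")
```

Because the chain is ordered, the log distinguishes "Alice asked the agent to do it" from "the agent did it unprompted," which is precisely the question accountability disputes turn on.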

Industry leaders are calling for concrete standards for agent interactions to address these challenges. Aniano said the industry must develop new safety mechanisms for deciding which tools these agents can interact with, and Xanthos warned that the fear of something going wrong is what is holding enterprises back from granting agents more autonomy. Even so, some companies, including Resolve AI, are experimenting with agents that are “a little more connected to systems” and are working with customers to develop guardrails.

Ultimately, the adoption of MCP and the increasing use of autonomous AI agents in enterprise systems require a new approach to security. As Xanthos noted, “There’s no going back, obviously; this is moving faster than maybe even mobile did. So the question is what do we do about it?” Interim measures, such as fine-grained access controls and declaratively designed API calls, can help mitigate the risks, but a more comprehensive framework for governing AI agents is needed to address the challenges posed by their increasing adoption.
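The interim measures the article mentions, fine-grained access controls combined with declaratively designed API calls, could take the form of a policy document evaluated before each call. The policy shape, paths, and limits below are entirely hypothetical:

```python
# Hypothetical sketch: a declarative policy checked before each agent API call.
# Because the policy is data rather than code, it can be reviewed, versioned,
# and tightened without touching the agent itself.

POLICY = {
    "agent": "refund-agent",
    "allow": [
        {"method": "GET",  "path_prefix": "/orders/"},
        {"method": "POST", "path_prefix": "/refunds/", "max_amount": 100},
    ],
}

def is_allowed(policy, method, path, amount=0.0):
    """Return True only if some declared rule permits this call."""
    for rule in policy["allow"]:
        if method == rule["method"] and path.startswith(rule["path_prefix"]):
            if amount <= rule.get("max_amount", float("inf")):
                return True
    return False
```

In this shape, the agent can read any order but can only issue refunds under the declared cap; anything outside the declared rules is denied by default.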
