ISG Software Research Analyst Perspectives

Why Data Software Providers Are Embracing Model Context Protocol

Written by Matt Aslett | Nov 19, 2025 11:00:00 AM

I have written several times this year about Model Context Protocol (MCP) and its importance in enabling agentic artificial intelligence (AI) use cases. Numerous data platform, data management, data operations and real-time data software providers have added support for MCP to their products in recent months. MCP has become so ubiquitous, in fact, that it is easy to forget the protocol was only introduced in November 2024. Given the breadth of support we are seeing for MCP among software providers, it is worth dedicating some time to explaining its origin and purpose, as well as assessing its strengths and weaknesses, to evaluate its significance in the future evolution of agentic AI.

Context is everything, as the saying goes. This is certainly true for generative AI (GenAI) and agentic AI. As enterprises began to evaluate the potential for large language models (LLMs) following the emergence of foundation GenAI models, it quickly became clear that establishing trust in their output would be essential to enterprise adoption, and that the ability to augment plausible content generated by GenAI with real-world context from enterprise content and data would be a critical requirement. Many data software providers adopted capabilities such as vector search and retrieval-augmented generation (RAG) to combine foundation models with enterprise information. As I recently explained, these capabilities have become commonplace, with 90% of providers assessed in the ISG 2025 Buyers Guide for Operational Data Platforms grading A- or above for vector model support and 80% grading A- or above for RAG support.

Seven in ten (70%) participants in ISG’s Data and AI Programs Study stated that the inability to expose or share data was a common disruption to data initiatives. RAG is ideal for retrieving static unstructured data via custom implementations that connect specific models with specific data. While custom implementations have a role to play in agentic AI, the need to support automated execution of business and data processes requires a more expansive and standardized approach to integration. Agentic AI is software designed to execute business processes through autonomous actions, potentially controlling multiple processes and systems through the orchestration of one or more AI or algorithmically determined rules-based models. It acts based on an understanding of the environment and the goals that should be achieved. Providing a standardized approach to connecting agents and AI models with the data sources and tools that provide that understanding was a prime motivation behind the development of MCP.

MCP was announced by Anthropic in November 2024 to enable LLMs, agents and applications to communicate with data platforms, file systems and development and productivity tools. MCP has a simple client/server architecture in which MCP Hosts (AI models and applications) utilize one or more MCP Clients to establish a one-to-one connection with an external data source or tool via an MCP Server implementation. In addition to the specification and software development kits for a variety of languages, Anthropic delivered support for MCP in its own Claude applications, as well as a repository of MCP servers, including reference implementations and implementations developed and published by other software providers. While adoption began with Anthropic, MCP has quickly established itself as a key open standard. Other AI model providers such as OpenAI, Google, Amazon Web Services and IBM have added MCP Client support, and numerous data platform and tool providers are adding MCP Server support. At the time of writing, the repository of third-party MCP servers includes approximately 450 implementations, highlighting the pace at which MCP has been adopted by software providers. That pace of adoption can be expected to continue. I assert that through 2027, almost all data-related software providers will adopt MCP to provide interoperability between agentic applications and trusted enterprise data and business workflows.
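To make the client/server exchange described above concrete: MCP messages are JSON-RPC 2.0 payloads, in which an MCP Client invokes a named tool on an MCP Server via the `tools/call` method. The sketch below, in plain Python, shows the shape of such an exchange; the tool name and arguments are hypothetical, not taken from any real server implementation.

```python
import json

# Hypothetical tool call sent from an MCP Client to an MCP Server.
# MCP messages are JSON-RPC 2.0; "tools/call" invokes a tool the
# server has exposed. The tool name and arguments are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",          # hypothetical tool name
        "arguments": {"sql": "SELECT 1"},  # hypothetical arguments
    },
}

# A successful response echoes the request id and returns the tool
# output as a list of content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "1"}]},
}

print(json.dumps(request))
```

In practice, developers would build on one of the official MCP software development kits rather than constructing these messages by hand; the point here is simply that the wire format is ordinary JSON-RPC, which is part of why server implementations have proliferated so quickly.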

It is important to note that MCP is not the only protocol in town. MCP enables vertical integration between agents and underlying data sources and tools. It does not facilitate horizontal integration between agents. This is the rationale for standards such as the Agent2Agent (A2A) Protocol, the Agent Network Protocol and the Agora Protocol, which my colleague David Menninger will be exploring in more detail in a forthcoming Analyst Perspective. In addition to general-purpose agentic communication protocols, MCP is also likely to be used in conjunction with function-specific protocols such as Agentic Commerce Protocol and Agent–User Interaction (AG-UI) Protocol. Of course, MCP is not the only approach to integrating enterprise software. It can be seen as the latest in a long line of protocols and approaches that include EDI, SOAP, REST, JSON and APIs. MCP and other agentic protocols will need to coexist with these existing protocols and approaches to provide integration with established enterprise systems.

Enterprises should also be aware that MCP comes with significant security considerations. Any software that enables access to and interaction with enterprise data, as well as automated code execution, should be treated with caution, and it is fair to say that capabilities and best practices for securing MCP are a work in progress. Key considerations include authentication and authorization, as well as guarding against command, prompt and tool injection vulnerabilities. Enterprises should proceed with caution and be sure to ask platform and tool providers implementing MCP what steps they have taken to mitigate these potential risks.
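One of the mitigations implied by the security considerations above can be sketched simply: rather than dispatching whatever tool name a model emits, a server (or an intermediating gateway) can check requested tool names against an explicit allowlist before execution. This is a minimal illustration, not a complete defense; the tool names and the `dispatch_tool_call` helper are hypothetical.

```python
# Hypothetical allowlist guard: only dispatch tool calls whose names
# appear on an explicit allowlist, rather than trusting whatever tool
# name a model produces. All names here are illustrative.
ALLOWED_TOOLS = {"query_database", "list_tables"}

def dispatch_tool_call(name: str, arguments: dict) -> str:
    """Reject any tool name not on the allowlist before dispatching."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    # ... dispatch to the real tool implementation would go here ...
    return f"dispatched {name}"

print(dispatch_tool_call("query_database", {}))  # prints: dispatched query_database
```

An allowlist addresses only one of the risks named above; authentication, authorization and guarding against prompt injection in tool descriptions and arguments require additional controls, which is precisely why enterprises should ask providers what mitigations they have implemented.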

The reason for MCP’s rapid adoption by data-related software providers is simple: it provides a standardized approach to ensure their products, and related enterprise datasets, are available to multiple agentic AI models and applications. The reason for enterprise interest in MCP is also simple: it provides a standardized approach to ensure that AI systems are aware of enterprise data platforms and tools, facilitates the sharing of contextual information with those AI systems, and enables developers to build composable integrations and workflows that take advantage of that contextual information. I recommend that enterprises incorporate support for MCP (and its security considerations) into their criteria for assessing potential data-related software providers.

Regards,

Matt Aslett