Market Perspectives

ISG Buyers Guide for Agentic and Generative AI in 2025 Classifies and Rates Software Providers

Written by ISG Software Research | Jul 31, 2025 12:00:00 PM

ISG Research is happy to share insights gleaned from our latest Buyers Guide, an assessment of how well software providers’ offerings meet buyers’ requirements. The Agentic and Generative AI: ISG Research Buyers Guide is the distillation of a year of market and product research by ISG Research.

As artificial intelligence (AI) continues to evolve, there is an increasing need for it to move beyond merely providing scores or recommendations and take action. While scores and recommendations offer valuable insights, they often require decision-makers to translate the data into effective strategies. As enterprises invest in generative AI (GenAI), they recognize this need. ISG Market Lens Research shows one-third of enterprises (32%) are using the technology to address business process workflow management, representing the third-biggest initiative to date based on investment. And it is second on the list of GenAI use cases expected to deliver the most benefit over the next two years.

ISG Research defines GenAI as the ability to generate new, seemingly real content, including text, documents, images and other types of media. GenAI is often used as the technology behind chatbots. It is also used to generate code and to summarize documents. The “generative” aspect of GenAI stems from the fact that it creates new material based on instructions or prompts from the users. GenAI uses large language models (LLMs) to generate new material by predicting the next element of the response, whether that element is the next word of a reply, the next line of code in a software program or the next pixel in an image or video.
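The "predict the next element" loop described above can be illustrated with a toy model. This is a minimal sketch, not a real LLM: the bigram table and its probabilities are invented, and real models predict over vocabularies of tens of thousands of tokens using learned weights rather than a lookup table.

```python
# Toy stand-in for an LLM: a bigram table mapping a word to the
# probabilities of possible next words. All values are invented.
bigram = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(prompt_word, max_tokens=3):
    """Repeatedly append the most probable next token (greedy decoding)."""
    tokens = [prompt_word]
    for _ in range(max_tokens):
        candidates = bigram.get(tokens[-1])
        if not candidates:
            break  # no known continuation; stop generating
        tokens.append(max(candidates, key=candidates.get))
    return " ".join(tokens)

print(generate("the"))  # the cat sat down
```

Real systems typically sample from the probability distribution rather than always taking the most probable token, which is one reason the same prompt can produce different responses.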

Because GenAI can generate responses to user prompts, it gives the appearance, especially when used in a chatbot, that it can take other types of actions the way a customer service agent would. A related set of capabilities called agentic AI has emerged to address this need. ISG Research defines agentic AI as the ability to take autonomous actions, involving multiple processes or systems, based on an understanding of the environment and the goals that should be achieved. These two sets of capabilities are closely related, with software providers combining generative and agentic AI in their offerings.

GenAI has sparked universal interest in AI across all industries and sizes of enterprises, and among consumers. GenAI is finally delivering on the promise of natural language processing (NLP), making it easier to interact with computer systems and other technology. However, it is not without its flaws. Like all predictive technologies, it is not 100% accurate. For a variety of reasons, such as missing or inaccurate information in the data used to create the LLMs, responses may be incorrect. When inaccurate responses are generated, they are referred to as hallucinations. If the training material is biased or offensive, the responses may be biased or offensive.

To increase the accuracy of responses and reduce hallucinations, enterprises started fine-tuning LLMs and using retrieval-augmented generation (RAG). Fine-tuning is the process of taking a pre-trained model and training it further on a specific data set or domain. RAG augments prompts with additional information that was not used in the training process, such as internal documents or data. RAG also depends on vector processing to determine the similarity between text submitted in a prompt and documents stored in a knowledge base, so that the most contextually relevant information can be retrieved. In addition, the process of constructing prompts that generate the best responses has evolved into a discipline known as prompt engineering.
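The retrieval step in RAG can be sketched as follows. This is a simplified illustration: the knowledge-base documents are invented, and simple bag-of-words vectors stand in for the learned embeddings a production system would use.

```python
import math
from collections import Counter

# Hypothetical internal documents standing in for a real knowledge base.
knowledge_base = [
    "Expense reports must be filed within 30 days.",
    "The VPN requires multi-factor authentication.",
    "Quarterly reviews are scheduled by each team lead.",
]

def embed(text):
    """Toy embedding: a bag-of-words count vector (real systems use learned embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(knowledge_base, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Augment the prompt with the retrieved context before sending it to the model.
query = "When are expense reports due?"
context = retrieve(query)[0]
prompt = f"Context: {context}\nQuestion: {query}"
```

A production pipeline would store precomputed embeddings in a vector database and pass the augmented prompt to an LLM; the structure — embed, rank by similarity, prepend the best matches — is the same.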

At the same time enterprises were trying to increase the accuracy of responses, they were also starting to question the costs associated with GenAI usage. While larger and larger models provided more accurate results and a wider variety of use cases, they were also driving up the costs of responding to user requests. Increasing model sizes also drive up the use of expensive GPU processing. As a result, enterprises began to explore the use of targeted, smaller models. The proliferation of models and model types led to the need to create and manage multiple models with model gardens or model catalogs. Now it is common for GenAI platforms to support and manage a variety of models depending on use cases. However, through 2027, due to a lack of tooling and governance, controlling the costs for GenAI deployments will remain a concern for one-third of enterprises, limiting their deployments and ROI.
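The idea of matching models to use cases to control costs can be sketched as a simple routing rule over a model catalog. The model names, task types and per-token costs below are invented for illustration; a real catalog would be far richer and routing decisions would also weigh quality and latency.

```python
# Hypothetical model catalog: route each task type to the cheapest
# suitable model, falling back to a large general-purpose model.
catalog = {
    "summarize": {"model": "small-summarizer", "cost_per_1k_tokens": 0.0002},
    "code":      {"model": "code-specialist",  "cost_per_1k_tokens": 0.0010},
    "general":   {"model": "large-general",    "cost_per_1k_tokens": 0.0060},
}

def route(task_type):
    """Return the model name for a task, defaulting to the general model."""
    return catalog.get(task_type, catalog["general"])["model"]

print(route("summarize"))  # small-summarizer
```

Routing routine tasks to smaller, targeted models while reserving the large model for open-ended requests is one way enterprises keep GPU-driven inference costs in check.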

GenAI markets also took a page from the traditional AI and machine learning (ML) market. The notion of machine learning operations (MLOps) was well established when GenAI exploded onto the scene. MLOps applies discipline to the process of developing and deploying ML models into production in order to provide repeatability, monitoring and governance. These same notions have been applied to the development and deployment of LLMs and are referred to as LLM operations or LLMOps. However, the market is still evolving.

The most recent changes in the GenAI market, which is still evolving rapidly, are based around the concept of agentic AI. As noted above, enterprises recognize the need to apply GenAI to business processes and workflow management. LLMs, the foundation models used to generate text, images and videos, are not necessarily the right models to generate actions. Foundation models must be trained on a set of actions and outcomes to effectively generate the right set of actions to achieve the desired goal. In some cases, large action models (LAMs) are being used in agentic AI processes, and in other cases, LLMs are being extended to incorporate actions and outcomes in the training data set.

As a result, modeling and evaluation tools need to be extended to support agents and their associated actions. Agents also need to perceive their environment and perform reasoning in order to match actions to goals. They must be able to support a variety of software applications, both pre-built and custom. And, unlike GenAI, which is driven by a prompt, agents need to be able to execute autonomously.
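The perceive-reason-act cycle described above can be sketched as a simple loop. Everything here is invented for illustration: a real agent would perceive its environment through tools and APIs and reason with a model, not compare two integers.

```python
# Minimal sketch of an agentic loop: perceive the state, reason about
# which action moves toward the goal, act, and repeat autonomously.
def run_agent(state, goal, max_steps=10):
    """Drive an integer state toward a goal, logging each action taken."""
    log = []
    for _ in range(max_steps):
        if state == goal:                                   # perceive: done?
            break
        action = "increment" if state < goal else "decrement"  # reason
        state = state + 1 if action == "increment" else state - 1  # act
        log.append((action, state))
    return state, log

final, trace = run_agent(state=2, goal=5)
print(final, trace)  # 5 [('increment', 3), ('increment', 4), ('increment', 5)]
```

Note the loop runs without further prompts once started, which is the key difference from prompt-driven GenAI; the `max_steps` guard is a common safety measure so an agent cannot run unbounded.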

To effectively apply agentic and GenAI in their organizations, enterprises need tooling and processes to support each of the needs outlined above. They must be able to develop, fine-tune, deploy and monitor models. They must be able to execute prompts and augment the responses to prevent hallucinations and inaccuracies. They must be able to incorporate agentic and GenAI into their business processes in the form of chatbots, assistants or autonomous agents. They must also be able to optimize the costs of using these systems to match their budgets.

Fortunately, the widespread interest in and demand for agentic and GenAI have driven software providers to invest heavily in meeting these needs. Providers are racing to gather market share before it is too late; as a result, it is a very competitive market. However, it is also an immature and rapidly changing market. Many of the capabilities needed are still under development or in various stages of pre-release. The providers with more of these capabilities released and supported will be better positioned to meet today’s enterprise needs.

The ISG Buyers Guide™ for Agentic and Generative AI evaluates software providers and products in the following key areas: agentic AI, GenAI, preparation of data used in AI processes, support for optimizing model execution, developer tooling and LLM operations.

This research evaluates the following software providers that offer products that address key elements of agentic and GenAI as we define it: Alibaba Cloud, Altair, Anthropic, Automation Anywhere, AWS, C3 AI, Cohere, Databricks, Dataiku, DataRobot, Domino Data Lab, Google Cloud, H2O.ai, Hugging Face, IBM, Microsoft, NVIDIA, OpenAI, Oracle, Palantir, Quantexa, Salesforce, SAP, ServiceNow, Snowflake, Teradata, UiPath and Weights & Biases.

This research-based index evaluates the full business and information technology value of agentic and generative AI software offerings. We encourage you to learn more about our Buyers Guide and its effectiveness as a provider selection and RFI/RFP tool.

We urge organizations to do a thorough job of evaluating agentic and generative AI offerings, using this Buyers Guide both as the results of our in-depth analysis of these software providers and as an evaluation methodology. The Buyers Guide can be used to evaluate existing suppliers and provides evaluation criteria for new projects. Using it can shorten the cycle time for an RFP and the definition of an RFI.

The Buyers Guide for Agentic and Generative AI in 2025 finds Google Cloud first on the list, followed by Oracle and IBM.

Software providers that rated in the top three of any category, including the product and customer experience dimensions, earn the designation of Leader.

The Leaders in Product Experience are:

  • Google Cloud.
  • IBM.
  • Oracle.

The Leaders in Customer Experience are:

  • Databricks.
  • Oracle.
  • Google Cloud.

The Leaders across any of the seven categories are:

  • Oracle, which has achieved this rating in six of the seven categories.
  • Databricks and Google Cloud in four categories.
  • Microsoft in three categories.
  • AWS, Hugging Face, IBM and Teradata in one category.

The overall performance chart provides a visual representation of how providers rate across product and customer experience. Software providers with products scoring higher in a weighted rating of the five product experience categories place farther to the right. The combination of ratings for the two customer experience categories determines their placement on the vertical axis. As a result, providers that place closer to the upper-right are “exemplary” and rated higher than those closer to the lower-left and identified as providers of “merit.” Software providers that excelled at customer experience over product experience have an “assurance” rating, and those excelling instead in product experience have an “innovative” rating.

Note that close provider scores should not be taken to imply that the packages evaluated are functionally identical or equally well-suited for use by every enterprise or process. Although there is a high degree of commonality in how organizations handle agentic and generative AI, there are many idiosyncrasies and differences that can make one provider’s offering a better fit than another.

ISG Research has made every effort to encompass in this Buyers Guide the overall product and customer experience from our agentic and generative AI blueprint, which we believe reflects what a well-crafted RFP should contain. Even so, there may be additional areas that affect which software provider and products best fit an enterprise’s particular requirements. Therefore, while this research is complete as it stands, utilizing it in your own organizational context is critical to ensure that products deliver the highest level of support for your projects.

You can find more details on our community as well as on our expertise in the research for this Buyers Guide.