ISG Software Research Analyst Perspectives

Google Next Highlights AI Investments

Written by David Menninger | Jun 3, 2025 10:00:00 AM

The artificial intelligence (AI) landscape is undergoing dramatic transformation, with enterprises rapidly adapting to a world where AI is no longer just a possibility but a necessity. ISG Market Lens research of 300 enterprises shows that AI initiatives were the second largest category of IT spending for 2024, behind customer experience initiatives. Furthermore, enterprises plan to increase 2025 IT spending on AI initiatives by 5.7%, which is more than any other category and nearly three times the 2.7% average planned increase in IT spending.

Faced with the challenge of realizing value from AI investments, enterprises are shifting efforts from pilot programs to full-scale implementations. Among participants, 42% report they are moving toward or are fully in production, with another 43% running live pilots or trials. In this context, it was no surprise to see an emphasis on AI at the recent Google Cloud Next ’25 event in Las Vegas.

Google Cloud, a subsidiary of Alphabet Inc., is a cloud service provider offering infrastructure as a service and a portfolio of cloud AI, analytics and data services. This Analyst Perspective focuses on Google Cloud’s AI-related announcements; Matt Aslett covers the data- and analytics-related announcements. Under the umbrella of AI, Google placed heavy emphasis on agentic AI and multimodal generative AI capabilities. As I’ve described previously, we define agentic AI as the ability to take autonomous actions involving multiple processes or systems based on an understanding of the environment and the goals to be achieved.

Google announced several new agentic AI capabilities at Next ’25, including Agentspace, Agent Builder enhancements and a dedicated AI Agent Marketplace section within the Google Cloud Marketplace. Agentspace provides tools to find, use and even create agents. It includes three Google-developed agents. The Deep Research agent is described as a “personal research assistant” that browses the web to assemble information into reports, including a podcast-style audio overview. As the name implies, the Idea Generation agent (in preview) can generate ideas across domains such as marketing and product development, rank those ideas and gather input from users to refine the criteria for evaluating the ideas. The NotebookLM Plus agent is an extension of the NotebookLM collaboration tool for collecting, summarizing and sharing sets of information. Developers can create custom agents or bring in agents developed on other platforms. 

Vertex AI Agent Builder is a low-code/no-code interface for creating custom agents. To supplement these capabilities, the company announced an open-source Agent Development Kit for building agents and Agent Engine, a runtime for developing, testing and deploying custom agents. Google also announced an open protocol called Agent2Agent (A2A) along with 60 partners, including large software providers and global service providers. A2A is a protocol for communication between agents and is intended to complement Anthropic’s Model Context Protocol (MCP). Agent-to-agent communication is critical for the successful creation and deployment of multi-agent systems. The company acknowledges there is still more work ahead, particularly in the areas of agent governance and agent ops. Still, these enhancements show significant progress in the past year.

Google developed foundation models for various purposes as part of its AI offerings. The Gemini 2.5 family of models was announced before the Next ’25 event. These models include thinking and reasoning capabilities critical for enabling agentic AI workflows. Gemini supports multimodality, allowing input of images, audio, video, code repositories and text prompts. The context window has also been expanded to accommodate the larger files associated with multimodal input.

In addition to the multimodal capabilities in Gemini, Google touted other models, including Veo 2 for video creation and editing, Lyria for generating music from a text prompt, Chirp 3 for text-to-speech and Imagen 3 for image generation and enhancement. While these models target specific use cases, demand is broad: ISG Research finds that 42% of enterprises have visual generative workloads, 30% video, 24% audio and 5% music. As a demonstration of its multimodal capabilities, Google shared clips of its work with the Sphere, Warner Bros. and Magnopus to expand and enhance the original Wizard of Oz movie to 16K resolution. It’s scheduled to premiere at the Sphere on August 28th.

Google made several announcements regarding AI infrastructure, including Ironwood, the seventh-generation tensor processing unit optimized for AI workloads. Google continues to partner with NVIDIA to bring Gemini to NVIDIA Blackwell systems and will support NVIDIA Vera Rubin GPUs when available. Google is also bringing various parts of its AI portfolio, including Gemini and Agentspace, to Google Distributed Cloud, which provides on-premises deployments of Google Cloud services.

Google was rated as an Exemplary provider in our 2024 AI Platform Buyers Guide. Google Cloud Next ’25 underscored the central role of AI in the changing technology landscape. The company made a point to emphasize its 600+ customer references and case studies. The innovations unveiled, spanning multimodal capabilities to AI agents, demonstrated how enterprises can harness the power of AI to foster creativity, improve efficiencies and engage audiences in new ways. As enterprises navigate this transformative era, I recommend assessing Google Cloud’s offerings when evaluating candidates for the AI architecture.

Regards,

David Menninger