Google Rebrands Its Data Stack for Agentic AI
Google is packaging its data and analytics tools under a new umbrella called the Agentic Data Cloud, an architecture designed to help enterprises move AI projects from experimentation to production. The idea is to turn scattered enterprise data into a shared semantic layer that AI agents can understand, reason over, and act on more reliably.
At the center of the strategy are Google’s existing platforms, including BigQuery, Dataplex, and Vertex AI. Google is now positioning them as parts of a broader intelligence layer that combines metadata, governance, and interoperability across cloud and third-party systems.
Knowledge Catalog Becomes the Core
The foundation of the new approach is Knowledge Catalog, the next step in the evolution of Dataplex Universal Catalog. Google says it extends metadata management into a semantic layer that maps business meaning and relationships across data sources, helping agents understand not just where data lives but what it means.
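To make the idea concrete: a semantic-layer entry can be thought of as metadata that binds a business term to a definition and to the physical columns that carry it across systems. The sketch below is purely illustrative (the class, term, and table names are invented for this example, not Google's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class SemanticTerm:
    """Illustrative semantic-layer entry: a business term bound to its
    definition and the physical columns that carry it across systems."""
    name: str
    definition: str
    physical_columns: list = field(default_factory=list)  # (system, table, column)
    related_terms: list = field(default_factory=list)

# A hypothetical mapping an agent could consult before querying.
revenue = SemanticTerm(
    name="net_revenue",
    definition="Gross bookings minus refunds and discounts, in USD",
    physical_columns=[
        ("bigquery", "finance.bookings", "net_amount_usd"),
        ("salesforce", "Opportunity", "Amount"),
    ],
    related_terms=["gross_revenue", "refunds"],
)

# The agent resolves the business term to a concrete column per system.
lookup = {sys: (tbl, col) for sys, tbl, col in revenue.physical_columns}
print(lookup["bigquery"])  # ('finance.bookings', 'net_amount_usd')
```

The point of the layer is that an agent asks for "net revenue" once and the catalog answers with where that term lives in each system, rather than the agent guessing column names.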
Knowledge Catalog also supports third-party catalogs and major enterprise apps such as Salesforce, Palantir, Workday, SAP, and ServiceNow. Enterprises can bring third-party data into Google’s lakehouse, where it is automatically mapped into the catalog’s context model.
Adding Business Context
Google is also adding tools to capture business logic more directly inside its own environment. One preview feature is a LookML-based agent that can infer semantics from documentation, while another BigQuery preview lets enterprises embed business rules for faster analysis.
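"Inferring semantics from documentation" can be sketched as a toy extraction step. The snippet below is a stand-in for what such an agent might do (Google has not detailed the actual mechanism, and the documentation text and field names here are invented): scan prose documentation for column mentions and attach the descriptions as metadata.

```python
import re

# Hypothetical documentation text for a table's columns.
doc = """
net_amount_usd: bookings net of refunds, in US dollars.
order_date: date the order was placed (UTC).
"""

# Toy extraction: lines of the form "column_name: description"
# become column-level metadata an agent can use.
inferred = {
    m.group(1): m.group(2).strip()
    for m in re.finditer(r"^(\w+):\s*(.+)$", doc, flags=re.MULTILINE)
}
print(inferred["order_date"])  # date the order was placed (UTC).
```

A real system would use a language model rather than a regex, but the output is the same shape: column names annotated with business meaning.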
The catalog is designed to keep learning over time. Google says it can profile structured datasets, tag unstructured content in Cloud Storage, and even infer missing schema or relationships using Gemini models. That means the platform is not just cataloging data, but actively enriching it with context as it is used across the organization.
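Relationship inference of this kind can be approximated with simple heuristics. The sketch below is not Google's implementation (which relies on Gemini models); it illustrates one common heuristic: flag a likely foreign-key relationship when nearly all values in one column also appear in another. The sample data is invented.

```python
def infer_relationship(child_values, parent_values, threshold=0.95):
    """Heuristic: if nearly all distinct values in the child column also
    appear in the parent column, flag a likely foreign-key relationship."""
    child, parent = set(child_values), set(parent_values)
    if not child:
        return False
    overlap = len(child & parent) / len(child)
    return overlap >= threshold

# orders.customer_id values are a subset of customers.id values,
# so the heuristic flags a probable orders -> customers relationship.
orders_customer_id = [1, 2, 2, 3, 5]
customers_id = [1, 2, 3, 4, 5]
print(infer_relationship(orders_customer_id, customers_id))  # True
print(infer_relationship([7, 8, 9], customers_id))           # False
```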
Why Semantics Matter
Analysts say this semantic layer is where the real competition is heading. Futurum Group analyst Dion Hinchcliffe described inconsistent meaning as one of the hardest enterprise AI problems, since agents cannot reason well if different systems define the same business term in different ways.
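A toy example makes the problem concrete (the systems, definitions, and data below are invented for illustration): two systems that define "active customer" differently will give an agent two different answers to the same question.

```python
# Invented sample data shared by two systems.
customers = [
    {"id": 1, "days_since_login": 10, "open_subscription": False},
    {"id": 2, "days_since_login": 45, "open_subscription": True},
    {"id": 3, "days_since_login": 200, "open_subscription": False},
]

# CRM definition of "active": logged in within the last 30 days.
crm_active = [c["id"] for c in customers if c["days_since_login"] <= 30]

# Billing definition of "active": holds an open subscription.
billing_active = [c["id"] for c in customers if c["open_subscription"]]

print(crm_active)      # [1]
print(billing_active)  # [2]
```

An agent asked "how many active customers do we have?" gets a different answer depending on which system it queries; a shared semantic definition is what removes that ambiguity.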
That is why Google’s rivals are moving in a similar direction. Microsoft is pushing Fabric IQ, while AWS is developing Nova Forge, both aimed at adding semantic context on top of enterprise data so AI systems can be more reliable and easier to operate at scale.
What This Means for Enterprises
Google’s message is that AI agents need more than access to raw data — they need trusted context, governance, and business meaning. By combining cataloging, semantic enrichment, and cross-cloud interoperability, the company is trying to make its data stack the default foundation for agentic AI.
The broader race is no longer just about storage or analytics. It is about which cloud can turn enterprise data into a usable reasoning layer for AI agents first.