Inside the Google Cloud Next ’26 keynote and the blueprint for the agentic enterprise

Google Cloud did not use its opening keynote to argue that AI is useful. That fight is over. The whole presentation was built around a harder claim: enterprise AI is leaving the demo phase, and the companies that matter now need architecture, governance, identity, data plumbing, and security that can survive scale. The keynote kept returning to the same point in different language. A good model is not enough. A good assistant is not enough. A good proof of concept is not enough. Google wants to sell the stack that sits under all of it. The official session description framed the keynote as a blueprint for the “agentic enterprise,” and Thomas Kurian’s adapted keynote transcript on Google Cloud’s own blog uses the same language.

That made the event more revealing than a standard product showcase. Google was not chasing applause line by line. It was trying to reset the terms of the enterprise AI conversation. The keywords were everywhere: build, scale, govern, optimize, long-running agents, trusted context, machine-speed defense, cross-cloud data, persistent memory, auditability. Those are not the words of a consumer AI keynote. They are the vocabulary of CIOs, platform teams, data leaders, security chiefs, and procurement offices that have already tested AI and now have to decide whether it deserves a permanent place in the operating model of the company. Google’s broader Next ’26 materials make that explicit, presenting the event as a roadmap from AI adoption toward large-scale business change.

The chapter timing of the keynote matters because the pacing itself tells the story. The pre-show plays like a creative flex. Then the presentation hardens into infrastructure, data, security, and operational control. By the end, Google has moved from showmanship to an argument about enterprise systems. That arc was deliberate. It tells you what Google thinks buyers are ready to hear in 2026: not “look what the model can say,” but “look what the enterprise can trust.”

The room before the message

The pre-show looked almost playful. Live music, hand-controlled visuals, and talk of Music AI Sandbox and Gemini-generated visuals gave the room a loose, festival-like feel. It was a familiar conference move: loosen the crowd up, remind people that Google still wants to be seen as inventive, and make AI feel tactile before the executive layer takes over. That opening matters because it sets up a contrast the keynote then uses for the rest of its runtime. The surface is expressive. The sales argument is industrial.

That contrast says a lot about Google’s AI identity problem. Google wants to appear imaginative without looking unserious. It wants the room to feel the energy of generative AI while also believing that Google Cloud is the adult in the market, the vendor that can carry a messy enterprise from experimentation to production. The pre-show let Google nod to culture and creation. The keynote itself was about control.

There is a small but useful lesson in that staging choice. Google understands that enterprise AI still has a persuasion problem. Plenty of executives have seen dazzling model demos. Plenty have also watched internal pilots stall, budgets get frozen, compliance teams get nervous, and users drift back to old workflows. So the keynote had to do two jobs at once. It had to keep AI exciting enough to justify continued spending, and sober enough to reassure buyers that Google’s answer is not just more novelty.

The keynote’s real thesis

Once Thomas Kurian takes the stage, the keynote stops being a conference opener and becomes a framing exercise. His main argument is simple and sharp: the pilot phase is over, and the next challenge is moving AI across the whole enterprise rather than one isolated workflow. Google’s official session page says the keynote is about guiding companies from AI adoption to large-scale transformation. Kurian’s adapted transcript says the answer is “a unified stack,” not a patchwork of disconnected silicon, models, data, and apps.

That idea sounds obvious, but it is doing real commercial work. Google is pushing back against the way many companies bought AI in 2023, 2024, and even 2025: one model here, one coding tool there, one chatbot trial for customer service, one document assistant for internal search. Those purchases created activity, not architecture. Google’s keynote says the era of scattered AI tools is ending, and the replacement is a platform story in which infrastructure, data, security, application logic, and agent orchestration are supposed to fit together from day one. Google’s own Next ’26 overview and the product launch for Gemini Enterprise Agent Platform both reinforce that claim.

The deeper move is rhetorical. Google is trying to redefine the market around a problem that favors Google. If enterprise AI is mostly a model contest, buyers can compare model vendors. If enterprise AI is a full-stack systems contest, Google can point to TPUs, networking, storage, BigQuery, Workspace, Chrome, Gemini, security, and its global cloud footprint. That is a much stronger position.

The keynote’s stack in one glance

Google’s layer | What the keynote attached to it
Training and inference | TPU 8t, TPU 8i, Virgo Network, Managed Lustre, NVIDIA Vera Rubin support
Control and delivery | Gemini Enterprise Agent Platform, Gemini Enterprise app, Workspace Intelligence, security and governance layers

That two-part split is the cleanest way to read the keynote. One half is about moving and reasoning over massive amounts of compute and data. The other half is about getting that intelligence into everyday work without losing control. Nearly every announcement on stage fits one side or the other.

Google as customer zero

Sundar Pichai’s cameo serves a familiar purpose. He turns product positioning into corporate validation. Google has used the “customer zero” line for years, but here it had extra weight because the keynote needed proof that agentic workflows are not just selling points. Pichai said Google uses AI internally across coding, marketing, and security. In the official Google recap, he wrote that 75% of new code at Google is AI-generated and approved by engineers, up from 50% the previous fall, and that a complex migration done by agents and engineers together was completed six times faster than it had been a year earlier. He also tied Google’s Cloud Next story to much larger infrastructure spending, while Reuters separately reported Alphabet’s 2026 capex plan at $175 billion to $185 billion.

Those numbers are not just boasting. They are part of the sales logic. Google is telling enterprise buyers: we are not asking you to trust a theory we do not trust ourselves. We run these systems internally. We hit the same limits you do. We spend at a scale that forces us to solve them. That matters because enterprise buyers tend to discount cloud keynote promises unless the vendor can show serious internal adoption.

There is another layer here. When Pichai says Google’s security operations center agents triage tens of thousands of threat reports each month and cut threat mitigation time by more than 90%, he is not simply praising Google’s tools. He is drawing a line between ordinary automation and high-pressure operating environments. Security, code migration, and global marketing are messy. They involve policy, handoffs, deadlines, exceptions, and reputational risk. If Google says agents are useful there, it is trying to move the conversation away from low-stakes prompt demos and toward the workflows that actually move budgets.

That is also where the keynote starts to sound less like a software launch and more like a management theory. Google is saying the real question is no longer whether an agent can be built. The real question is whether an organization can supervise thousands of them without losing traceability, context, or control. That is a real enterprise problem. It is also the problem Google chose because it leads directly into its biggest launch of the day.

A platform built for agents, not prompts

The biggest announcement was Gemini Enterprise Agent Platform, which Google describes as the evolution of Vertex AI into a single place to build, scale, govern, and optimize agents. The launch blog and Thomas Kurian’s keynote transcript are unusually clear about the pitch. This is not framed as a chatbot builder. It is framed as infrastructure for production-grade agent systems, with Agent Studio, Agent Runtime, Agent Identity, Agent Registry, Agent Gateway, Agent Evaluation, Agent Observability, and related controls bundled into one product story.

That matters because the enterprise agent market has been full of loose language. Plenty of vendors call almost anything an agent. Google tried to tighten the definition by anchoring the platform around lifecycle management. An enterprise agent, in Google’s version, is something that has identity, policy boundaries, memory, routing, evaluation, traceability, and a path to production. That is a better definition than most keynote language gives you, and it explains why Google spent so much time on registries, gateways, and observability instead of flashy conversation examples.

The model story underneath the platform was also aggressive. Google said the platform provides access to Gemini 3.1 Pro, Gemini 3.1 Flash Image, Lyria 3, and Anthropic models including Claude Opus, Sonnet, and Haiku, with Claude Opus 4.7 support added at launch. That open-model posture is important because Google kept trying to separate itself from closed ecosystems. The keynote’s repeated theme of “freedom” was not accidental. Google wants to be seen as the vendor with the integrated stack and the least ideological lock-in.

The platform story gets even more interesting when you look at the governance features. Agent Identity gives each agent a verifiable cryptographic identity and auditable authorization trail. Agent Registry creates a central record of approved assets. Agent Gateway inspects and governs traffic across agent-to-agent and agent-to-tool connections, including protocols such as MCP and A2A. That is the kind of plumbing most enterprise buyers have been missing. Google is betting that the buyers who already tested AI will find this more persuasive than another leap in model benchmark performance.
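
The registry, identity, and gateway pieces described above share one shape: deny by default, check identity against a central record, and log every decision. A minimal sketch of that pattern, with hypothetical agent and tool names (this is not the Gemini Enterprise API, just an illustration of the control flow the keynote describes):

```python
from dataclasses import dataclass

# Sketch of the registry/gateway pattern: every agent has an identity, the
# registry lists approved agents and the tools each may call, and the gateway
# checks both before routing a call. All names here are hypothetical.

@dataclass
class AgentRecord:
    agent_id: str       # stable identity a real platform would verify cryptographically
    allowed_tools: set  # explicit allowlist; anything else is denied

class AgentGateway:
    def __init__(self):
        self.registry = {}   # agent_id -> AgentRecord
        self.audit_log = []  # append-only record of every routing decision

    def register(self, record: AgentRecord):
        self.registry[record.agent_id] = record

    def route(self, agent_id: str, tool: str) -> bool:
        record = self.registry.get(agent_id)
        allowed = record is not None and tool in record.allowed_tools
        self.audit_log.append((agent_id, tool, "allow" if allowed else "deny"))
        return allowed

gateway = AgentGateway()
gateway.register(AgentRecord("pricing-agent", {"inventory.read", "jira.create"}))

print(gateway.route("pricing-agent", "jira.create"))   # True: registered and allowlisted
print(gateway.route("pricing-agent", "payments.send")) # False: tool not on the allowlist
print(gateway.route("rogue-agent", "inventory.read"))  # False: never registered
```

The point of the sketch is the audit trail: even denied calls leave a record, which is what makes "thousands of agents" supervisable after the fact.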

The employee front door

Google did not stop with the builder platform. It paired the technical layer with the Gemini Enterprise app, which it positioned as the place where the rest of the company actually uses agents. That distinction matters. Many enterprise AI programs break because the build environment and the user environment never really meet. A few specialists can create workflows, but ordinary staff never adopt them. Google’s answer is to make the app itself a controlled, shared environment for discovering, creating, running, and monitoring agents.

The new app features fit that goal. Google announced no-code Agent Designer, long-running agents managed through Inbox, shared Projects, and Canvas for editing Docs and Slides without leaving Gemini Enterprise. The product blog is explicit that Projects are meant to create persistent team memory, while Canvas is supposed to reduce tab switching and keep drafting inside the same AI workspace. Microsoft 365 export support is an equally practical signal. Google knows the enterprise does not turn pure overnight. It has to meet buyers where they already are.

This is where the keynote stopped sounding like classic cloud infrastructure and started sounding like a serious productivity platform bid. Google is trying to collapse several old boundaries at once: chat tool versus app, builder versus end user, assistant versus workflow engine, document editor versus orchestration layer. That is a much bigger ambition than “put Gemini in the sidebar.”

It is also risky. The more Google asks Gemini Enterprise to be the front door for employee work, the more it has to prove that people will actually stay there. Enterprise software history is full of products that promised a single pane of glass and became just another pane. The keynote demo tried to counter that skepticism by showing context transfer across sessions, handoffs into development tooling, and content creation that flows into Workspace rather than away from it. Google knows this is the weak point in any agent story: if the agent cannot survive contact with daily work, it becomes shelfware.

What the live demo actually proved

The furniture retailer demo was slick, but its real value was not the furniture. It was the chain of work. One prompt kicks off trend analysis, stale inventory detection, pricing recommendations, landing-page generation, a Jira handoff to development, then internal launch materials inside Workspace. That sequence captures Google’s actual thesis better than any single slide did. Agents are supposed to carry context across departments, tools, and time, not just answer a question well.

Google’s official Gemini Enterprise material backs up that direction. The app is supposed to let teams discover, create, share, and run agents in one place, with Projects creating persistent shared context and Canvas handling the artifact layer. The platform underneath then handles identity, governance, evaluation, and delivery. In other words, the demo’s flashier moments were only credible because Google had already spent time explaining the quieter parts below them.

The smart part of the demo was the handoff between roles. A marketer, a developer, and an operations lead do not live in the same software frame, yet the keynote wanted the audience to believe one agentic system can hand work between them without losing business context. That is more ambitious than traditional workflow automation, which often preserves task order but not judgment. Google’s promise is that an agent can keep the goal, the evidence, the brand rules, the files, and the handoff chain intact.

That promise is also where skepticism is healthiest. Demos hide ambiguity. Production work creates it. Inventory data is dirty. Brand rules contradict one another. Tickets are incomplete. Teams reject machine-generated work that feels slightly off. The keynote knew that, which is why it leaned so hard on governance, approval, observability, and source visibility. Google is not claiming autonomy without supervision. It is claiming higher-order delegation with tighter instrumentation. That is a more believable claim.

Customer proof and executive theater

Enterprise keynotes live or die on customer evidence, and Google packed the first hour with it. Signal Iduna, Bosch, KPMG, Virgin Voyages, Walmart, NASA, Honeywell, Liverpool, Citi Wealth, and Team USA all appeared in one form or another. Google’s separate customer roundup broadens that list with companies such as Capcom, Home Depot, Mars, and others moving from experiments to agent-driven operational use cases. The point was not subtle: Google wants buyers to think the market has already crossed the line from pilots to operating systems.

Some of the examples were stronger than others. Walmart’s story worked because it was plain. Put store leaders on a foldable device connected to enterprise data and help them spend less time in an office and more time on the floor. That is legible. Virgin Voyages also worked because the operational problem was concrete. Google’s own recap says the company is using more than 1,000 specialized agents, including one that cut mass itinerary rebooking from six hours to 11 minutes. That is a number people can picture.

The sports section with Shaun White looked different, but it still served the same enterprise purpose. It showed that Google can analyze high-speed, multimodal data, generate three-dimensional pose understanding from flat video, and make that output useful to experts and non-experts at once. Even the entertainment pieces were doing enterprise work. They were selling multimodal infrastructure, reasoning, and visualization under a more emotional wrapper.

What the customer parade really did, though, was lend social proof to the phrase “agentic enterprise.” That phrase is still not natural language for most people outside the vendor ecosystem. It sounds coined, because it is. Customer examples are how Google tries to make the term feel earned rather than invented. By the end of the first hour, the message is clear: you do not have to love the phrase, but Google wants you to accept the category.

Compute as a system, not a chip

The AI Hypercomputer segment was the keynote’s most technical stretch, and also one of its most important. Google’s claim was blunt: in the agent era, compute is no longer defined by a chip; it is defined by the whole data center. That is not just technical pride. It is a positioning move against a market that still tends to reduce AI infrastructure to accelerator headlines. Google wants buyers to think in systems, because systems are where Google can differentiate.

The headline announcement was the eighth generation of Google TPUs, split into two chips: TPU 8t for training and TPU 8i for inference. Google says TPU 8t scales to 9,600 chips and two petabytes of shared high-bandwidth memory in a single superpod, delivering 121 exaflops of compute and nearly 3x the compute performance per pod over the previous generation. TPU 8i is tuned for latency-sensitive reasoning and inference work, with 3x more on-chip SRAM than the prior generation and 80% better performance per dollar for inference. Google’s TPU launch article makes the design split explicit: training and serving have diverged enough that one chip no longer fits both jobs especially well.

That is the right argument for 2026. Early generative AI spending put huge attention on training frontier models. The agent story shifts the center of gravity. Enterprise buyers now care just as much about long-context inference, concurrency, throughput under load, memory behavior, start-up times, and predictable costs for systems that are always on. TPU 8i is Google’s answer to that shift. It is the part of the keynote that most clearly says the next AI race is not only about making models larger. It is about making reasoning systems usable at industrial volume.

Training and inference stop looking alike

Workload | Google’s answer at Next ’26
Frontier training | TPU 8t with large shared memory, higher interchip bandwidth, and Virgo-assisted scale-out
Agent-heavy inference | TPU 8i with more on-chip SRAM, lower-latency collectives, and better inference economics

This is one of the keynote’s most honest ideas. Training and inference used to be discussed as neighboring problems. Google treated them as different economic and physical systems, which is a better fit for what enterprise AI has become.

The infrastructure claims that mattered

The chips were only part of the infrastructure story. Google also used the keynote to talk about the surrounding mechanics that keep large AI systems from stalling: networking, storage, utilization, and orchestration. Virgo Network was the standout. Google says Virgo can connect 134,000 TPUs inside a single data center and more than one million TPUs across multiple sites, while also supporting large-scale NVIDIA Vera Rubin deployments. Those are immense numbers, but the real message is simpler: Google wants to be seen as the cloud that can still scale when the cluster becomes absurd.

Storage got nearly as much attention, which was smart. AI buyers have learned that storage bottlenecks can erase expensive accelerator gains. Google said Managed Lustre now delivers 10 TB/s of bandwidth, a 10x jump from the previous year, and that Rapid Bucket on Cloud Storage can exceed 15 TB/s in a single zonal bucket with sub-millisecond latency. Those details may sound dry on stage, but they are exactly the kind of details infrastructure buyers look for when the model is no longer the only bottleneck.

A subtler but more interesting claim sat underneath all this: Google is selling utilization, not raw performance. The TPU 8t launch talks about more than 97% “goodput.” The infrastructure post emphasizes removing scaling tax, reducing GPU blocked time, and keeping training utilization above 95%. That language reveals what sophisticated buyers are already thinking. Peak numbers are easy to headline. The expensive part of AI is idle time, stalled jobs, failed checkpoints, and clusters that look large on paper and underperform in use. Google’s infrastructure messaging felt credible because it stayed close to those operator headaches.
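
The economics behind the goodput pitch are simple enough to check on the back of an envelope. The cluster cost below is an invented placeholder; only the 121-exaflop pod figure and the contrast between a well-run cluster and a stalled one come from the keynote’s framing:

```python
# Toy illustration of why goodput moves money more than peak specs do.
# The $/hour figure is made up; 121 exaflops is the TPU 8t superpod number.

def effective_cost_per_exaflop_hour(cluster_cost_per_hour, peak_exaflops, goodput):
    """Cost of compute that actually lands in the model, not just on the invoice."""
    useful_exaflops = peak_exaflops * goodput
    return cluster_cost_per_hour / useful_exaflops

cost = 1_000_000  # hypothetical $/hour for a large training pod
peak = 121        # exaflops per superpod, per the keynote

well_run = effective_cost_per_exaflop_hour(cost, peak, 0.97)  # the claimed goodput
stalled  = effective_cost_per_exaflop_hour(cost, peak, 0.70)  # a struggling cluster

# Same hardware, materially higher effective price when jobs stall,
# checkpoints fail, and chips sit idle.
print(f"{stalled / well_run:.2f}x more expensive per useful exaflop-hour")
```

The ratio depends only on the two goodput figures, which is exactly why the keynote talks about utilization rather than peak performance: the denominator is the number buyers actually pay for.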

That helps explain why the hypercomputer section landed more cleanly than many cloud hardware segments do. It was not presented as engineering for engineering’s sake. It was framed as the physical layer required for agents that must reason, act, wait, retry, coordinate, and remain available without making the economics collapse.

Data stops being a warehouse and becomes context

Karthik Narain’s section carried one of the keynote’s strongest lines: reasoning without context is just a guess. That is the cleanest summary of the entire Agentic Data Cloud pitch. Google is arguing that the hard part of enterprise AI is no longer just generating language. It is grounding actions in the right business meaning at the right moment across structured and unstructured sources. The official announcement for the Agentic Data Cloud says Google is rethinking its data platform around a universal context engine, agent-first practitioner experiences, performance improvements, and cross-cloud access.

The flagship piece is Knowledge Catalog. Google says it aggregates context from BigQuery and partner platforms such as Palantir, Salesforce, SAP, ServiceNow, and Workday, while also extending into unstructured data through Smart Storage. Files landing in Cloud Storage can be tagged and enriched automatically, and the catalog uses usage logs and profiling to learn how the enterprise actually uses data, not only how that data is formally described. Search and retrieval are then treated as a new query path, built with semantic and lexical matching plus re-ranking.
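
The retrieval shape described above, separate lexical and semantic signals fused by a re-ranking step, can be sketched in a few lines. This is a toy: real systems use learned embeddings and trained re-rankers, and the cosine-over-word-counts "semantic" score here is only a placeholder, as are the documents and weights:

```python
import math
from collections import Counter

# Toy documents standing in for catalog entries. Names are hypothetical.
DOCS = {
    "rev_policy": "net revenue excludes returns and partner rebates",
    "brand_kit":  "brand rules for landing pages and launch materials",
    "inv_report": "stale inventory report with pricing recommendations",
}

def lexical_score(query, doc):
    # fraction of query terms that appear verbatim in the document
    q, d = query.lower().split(), set(doc.lower().split())
    return sum(t in d for t in q) / len(q)

def semantic_score(query, doc):
    # cosine similarity over word counts: a crude stand-in for embeddings
    qv, dv = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(qv[w] * dv[w] for w in qv)
    norm = (math.sqrt(sum(c * c for c in qv.values()))
            * math.sqrt(sum(c * c for c in dv.values())))
    return dot / norm if norm else 0.0

def rerank(query, alpha=0.5):
    # fuse the two signals and sort: the re-ranking step
    scored = {k: alpha * lexical_score(query, v) + (1 - alpha) * semantic_score(query, v)
              for k, v in DOCS.items()}
    return sorted(scored, key=scored.get, reverse=True)

print(rerank("what counts as net revenue"))  # "rev_policy" should rank first
```

Even this toy shows why the fusion matters: a purely lexical match misses paraphrase, a purely semantic match can drift, and the re-ranker is where an agent-facing catalog earns its keep.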

That is a serious shift in emphasis. Old enterprise data platforms were built for analysts, dashboards, and later machine learning teams. Google is saying the new primary user is the agent. If that becomes true, the data layer has to answer different questions. Not “can a human analyst find this table,” but “can an agent interpret ‘net revenue’ correctly in this business unit, retrieve the relevant evidence, and act on it without inventing missing semantics.” That is a much harder bar.

The other big move was Cross-Cloud Lakehouse. Google’s lakehouse announcement says the new system is built on Apache Iceberg, adds managed Iceberg support, and lets BigQuery reason over data in AWS and Azure without forcing data movement first. For buyers with multicloud reality rather than multicloud slogans, that is one of the most commercially important announcements in the keynote. Google is trying to win AI work even when the data does not live on Google first.

The five-minute froyo demo and Google’s data argument

The frozen-yogurt demo could have been forgettable. Instead, it was one of the keynote’s clearest statements of value. A trend appears. The team needs to know whether a flavor is safe, what the hidden allergens are, where the demand sits, and whether the launch pencils out financially. The demo’s point was not dessert. It was dark data, cross-cloud joins, and code generation under supervision.

Google’s own recap breaks the pitch into four pieces: Knowledge Catalog for building a business context graph, Data Agent Kit for a Gemini-powered data science authoring experience in IDEs and notebooks, Lightning Engine for Apache Spark, and Cross-Cloud Lakehouse for querying AWS and Azure data without copying it. The Agentic Data Cloud announcement adds two useful performance markers: Lightning Engine can be up to 4.5x faster than open-source alternatives and up to 2x better in price-performance than the leading competitor for large datasets.

The important part is not the speed number by itself. It is the workflow shape. The demo assumes a human still approves the plan, changes the simulation count, and reviews the notebook. That is a much more persuasive picture than full autonomy. It suggests a world where data practitioners stop spending so much time on glue work and spend more time on judgment. Google’s Data Agent Kit write-up leans into that exact idea, turning IDEs, notebooks, and terminals into places where agents help assemble pipelines and models directly inside the working environment.

There is also a quiet strategic point hiding here. If Google can make BigQuery and the lakehouse the place where agent-ready business context gets assembled, then Gemini becomes harder to dislodge. The data layer and the agent layer start to reinforce each other. That is a powerful position. It is why the data section felt less like a database update and more like a bid to define where enterprise AI gets its facts.

Security at machine speed

The security section was one of the keynote’s strongest because it started from a real change in attacker behavior instead of a vague fear narrative. Google’s security blog for Next ’26 says M-Trends 2026 found that the time from initial access to hand-off to a secondary threat actor has fallen from eight hours to 22 seconds in three years. It also says Google is introducing three new agents in Security Operations: Threat Hunting, Detection Engineering, and Third-Party Context. The same post says the Triage and Investigation agent processed more than 5 million alerts in the last year and cut a typical 30-minute manual analysis to 60 seconds.

Those are the right numbers to lead with because they explain the rest of the pitch. If attack speed has compressed that far, human-only operations are structurally too slow. Security becomes another domain where Google can argue that agents are not just useful, but necessary. The keynote’s phrase “machine speed” sounds dramatic, but here it has real logic behind it.
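
The staffing arithmetic behind that claim is worth running. Using the keynote’s own figures (5 million alerts, 30 minutes manual versus 60 seconds agent-assisted) and a rough 2,000 working hours per analyst-year, which is my assumption, not Google’s:

```python
# Back-of-envelope check on the triage numbers from the keynote.
alerts_per_year = 5_000_000
manual_minutes = 30   # typical manual analysis, per the keynote
agent_minutes = 1     # agent-assisted analysis, per the keynote

manual_hours = alerts_per_year * manual_minutes / 60
agent_hours = alerts_per_year * agent_minutes / 60

# ~2,000 working hours per analyst-year is a common rough figure (assumption).
analyst_years_manual = manual_hours / 2000
analyst_years_agent = agent_hours / 2000

print(f"manual triage:         {analyst_years_manual:,.0f} analyst-years")
print(f"agent-assisted triage: {analyst_years_agent:,.0f} analyst-years")
```

The manual path works out to over a thousand analyst-years for that alert volume, which is the quiet point of the section: at this scale, human-only triage is not a staffing problem, it is an impossibility.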

The integrated dark web intelligence announcement sharpened that case. Google says the preview system can analyze millions of daily external events with 98% accuracy to elevate the threats that matter most. Paired with Mandiant, VirusTotal, Chrome telemetry, and Google’s own threat intelligence footprint, the product story becomes less about a single AI feature and more about the advantage of Google-scale visibility. That is a familiar Google move, but it is still potent in security.

The larger point is that security was not parked off to the side of the keynote. It was treated as one of the five layers of the blueprint. That matters. For years, enterprise AI presentations often treated security as the last slide before the Q&A. Google treated it as part of the architecture itself. That is exactly what skeptical buyers wanted to hear.

Wiz, shadow AI, and the politics of control

The Wiz segment widened the security argument from classic cloud posture management into the messier territory of AI sprawl. Google’s security announcement says Wiz, now part of Google Cloud, extends protection across Google Cloud, AWS, Azure, Oracle Cloud, SaaS environments such as OpenAI, and agent studios including Gemini Enterprise Agent Platform, Microsoft Copilot Studio, Salesforce Agentforce, and AWS Agentcore. It also highlights AI-APP, Wiz Security Agents, and AI-BOM for visibility into models, frameworks, and shadow AI tooling.

That matters because “shadow AI” is turning into the same kind of executive headache that shadow IT once was, only faster. Employees can wire together models, agent frameworks, browser extensions, IDE integrations, and third-party plugins without waiting for central approval. Enterprises need a way to see what exists before they can govern it. The AI-BOM idea is a blunt answer to that problem: inventory first, policy second.
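
"Inventory first, policy second" has a simple mechanical core: enumerate every AI component in use, group it into a bill of materials, and flag what governance has never seen. A minimal sketch, with entirely hypothetical component names and an invented approved list (this is not the Wiz AI-BOM format):

```python
from collections import defaultdict

# Hypothetical discovery output: models, frameworks, and extensions found
# across the estate, some with no registered owner.
discovered = [
    {"name": "gemini-3.1-pro",     "kind": "model",     "owner": "platform"},
    {"name": "claude-opus",        "kind": "model",     "owner": "data-science"},
    {"name": "browser-summarizer", "kind": "extension", "owner": None},  # shadow AI
    {"name": "agent-framework-x",  "kind": "framework", "owner": None},  # shadow AI
]
approved = {"gemini-3.1-pro", "claude-opus"}  # hypothetical governance list

def build_bom(components):
    # group components by kind: the "inventory first" step
    bom = defaultdict(list)
    for c in components:
        bom[c["kind"]].append(c["name"])
    return dict(bom)

def shadow_ai(components):
    # anything unowned or unapproved is a visibility gap to resolve,
    # not yet a policy violation: the precondition for "policy second"
    return [c["name"] for c in components
            if c["owner"] is None or c["name"] not in approved]

print(build_bom(discovered))
print(shadow_ai(discovered))
```

The design point is the ordering: the shadow-AI list is derivable only after the inventory exists, which is why the BOM comes before any enforcement story.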

The live Wiz demo in the keynote worked because it showed the security graph, the external exposure, the validated risk, and the path to remediation in one chain. It made the risk concrete. An internet-exposed agent with access to sensitive data is not an abstract governance problem. It is a breach waiting for a timing mistake. The red, green, and blue agents narrative was theatrical, but the substance underneath it was solid: detection is not enough unless it closes into prioritized remediation.

There is a larger political message here too. Google wants the enterprise to believe that its agent story is open where buyers want openness and strict where buyers want control. That is the same balance the whole keynote tries to strike. Freedom in models, clouds, and partner ecosystem. Tight governance in identity, security, and policy enforcement. Whether buyers believe that balance holds in practice will shape how much of this keynote turns into real spend.

The case for an agentic task force

The “Agentic Taskforce” label is pure keynote language, but the underlying idea is simple enough: Google wants pre-built and semi-built agents to handle customer-facing and employee-facing work at scale. The customer experience side of the pitch included shopping agents, food ordering agents, Agent Assist, Omnichannel Gateway, and CX Agent Studio. Google’s Next ’26 day-one recap says Omnichannel Gateway helps agents maintain context across web, mobile, and voice so the conversation survives as customers move between surfaces.

That is the most mature part of Google’s task force story. Customer service already has queues, intents, scripts, escalation logic, and measurable outcomes. It is a natural place to pitch autonomous systems. The YouTube TV demo made the point well: voice support, plan explanation, multilingual switching, text-message follow-up, and business-rule changes handled in one environment. The keynote then showed how a team could update that system in CX Agent Studio without rewriting everything by hand.

The employee side of the “task force” argument is more ambitious because it overlaps with the broader Gemini Enterprise and Workspace story. Google is trying to make one idea cover many things at once: internal research agents, developer agents, operations agents, customer service agents, commerce agents, and productivity assistants. That breadth is impressive, but it also creates a risk. The wider the category gets, the easier it becomes for buyers to wonder what exactly they are purchasing. Google’s answer is to keep pulling everything back to the same platform and governance frame.

The strongest proof points came from customer examples again. Home Depot’s Magic Apron, Reliance’s shopping assistant, and Papa Johns’ ordering work were not positioned as clever experiments. They were described as direct contributors to customer journey, conversion, retention, and service continuity. Google wants the word “agent” to sound revenue-adjacent, not just tech-adjacent.

Workspace as the last-mile interface

The Workspace Intelligence segment may have been the most important piece for actual user adoption. Google Workspace’s new announcement describes Workspace Intelligence as a secure, real-time understanding layer across apps, projects, collaborators, and domain knowledge. It supports AI Inbox, AI Overviews in Gmail search, and more context-aware generation inside Docs, Slides, and Sheets. Google’s separate Workspace recap adds a business claim that Workspace now serves more than 3 billion users and over 13 million customers.

That scale matters because it explains why Workspace was not a side note. Google does not want Gemini Enterprise to end at the technical team. It wants the last mile of enterprise AI to run through the tools people already open all day. The demo in Chat and Slides hammered that point: alerts, file retrieval, source discovery, briefing creation, sales data pulled into a deck, citations visible, then team sharing without leaving the workflow.

This was a smart section because it grounded the agent story in a familiar pain: too many tabs, too many files, too much context hunting. Google’s Workspace pitch is not really about magical drafting. It is about reducing the tax of switching systems to gather information before the real work starts. That is a much better enterprise productivity story than generic writing assistance.

The Microsoft angle was notable too. Google says migration from Microsoft 365 is now up to five times faster with new data import tooling, while interoperability features such as Office macro conversion, Office file editing in Gmail, and redlining in Docs are meant to ease coexistence. That is not just product polish. It is competitive strategy. Google knows plenty of companies will not rip out Microsoft in one move, so it is trying to make Gemini and Workspace easier to adopt inside mixed environments.

The partner story behind the keynote

Near the end, Kurian returned to a theme that had run quietly through the keynote from the start: openness. Google cast itself as the vendor that offers an integrated stack without demanding a closed one. That argument showed up in model support, MCP support, cross-cloud lakehouse, partner agents, security coverage across clouds, and partner services around deployment. Google’s partner-built agents launch said those agents are now available directly inside the Agent Gallery and paired that move with a $750 million partner fund for agentic development.

This is not just ecosystem window dressing. Enterprise AI is still too messy for one vendor to own every domain workflow. Buyers know that. Google knows that. So the open-ecosystem posture is less about ideology than about shortening the path to credible use cases. Oracle, Salesforce, ServiceNow, Workday, Atlassian, and others already hold business logic that companies trust. Google would rather pull that logic into Gemini Enterprise than pretend it can replace it overnight.

The MCP emphasis fits the same pattern. Google had already been moving toward managed MCP support for Google services, and the keynote folded that into the platform narrative by making agents easier to connect to tools and APIs in a governed way. That is one reason the keynote kept feeling more operational than visionary. Google is trying to remove friction from the boring parts that kill enterprise rollouts.

There was also a subtler strategic contrast with rivals. Google wants to sound open against vendors perceived as closed, full-stack against vendors perceived as fragmented, and production-minded against vendors still associated with chatbot-era experimentation. The keynote never said that outright. It did not need to.

What Google Cloud Next ’26 changed

By the close, the keynote had made its case. Whether you buy every claim is a separate question. The shape of the argument was the real news. Google is no longer pitching enterprise AI as a collection of model features. It is pitching it as a company-wide operating layer. The five-part blueprint it repeated all morning—AI Hypercomputer, Agentic Data Cloud, Agentic Defense, Gemini Enterprise Agent Platform, Agentic Taskforce—was not elegant language, but it was coherent. Each layer answered a practical executive question. Where does the compute come from? Where does the context come from? Who secures it? Who builds and governs it? Where does it show up in work?

The keynote also showed that the enterprise AI market is maturing in a specific direction. Buyers are asking harder questions now. Not whether AI can draft text. Not whether it can answer an FAQ. They are asking whether it can survive procurement, identity, cost management, cross-cloud data, threat exposure, audit requirements, and the daily habits of people who still live in chat, docs, tickets, and email. Google aimed the whole keynote at those questions.

That is why this event felt different from the first generative AI wave. The early wave sold possibility. Cloud Next ’26 sold coordination. That is a less glamorous word, but it is the one that matters if agents are really going to move from pilot projects to the center of enterprise work.

Questions readers will ask after the Google Cloud Next ’26 keynote

What was the main message of the Google Cloud Next ’26 opening keynote?

The keynote argued that enterprise AI has moved beyond experimentation and now needs a full operating stack for agents, data, security, governance, and daily employee use.

What does Google mean by the “agentic enterprise”?

Google uses the term to describe organizations that do more than deploy AI assistants. In its framing, agents can reason across business context, take action across systems, and work inside governed enterprise controls.

What was the biggest product announcement in the keynote?

The central launch was Gemini Enterprise Agent Platform, which Google positioned as its main environment for building, scaling, governing, and evaluating enterprise agents.

How is Gemini Enterprise Agent Platform different from a simple chatbot builder?

Google presented it as a lifecycle platform with Agent Studio, runtime, identity, gateway, registry, evaluation, observability, and security controls. The emphasis was on production systems, not just prompt interfaces.

What is the Gemini Enterprise app supposed to do?

It is the user-facing layer where employees can discover, create, share, and run agents in a controlled environment, with tools such as Agent Designer, Inbox, Projects, and Canvas.

What are Projects and Canvas in Gemini Enterprise?

Projects are shared workspaces for teams and agents with persistent context. Canvas is an in-app editor for Docs and Slides, with Microsoft Office export support added for interoperability.

What did Google announce on the infrastructure side?

Google introduced eighth-generation TPUs with two designs: TPU 8t for training and TPU 8i for inference, plus Virgo Network, storage upgrades, and expanded NVIDIA support.

Why did Google split its TPU story into TPU 8t and TPU 8i?

Google’s argument is that training and inference now have different physical and economic needs. Training favors scale and shared memory. Inference favors latency, memory behavior, and cost efficiency under heavy concurrency.

What is Virgo Network?

Virgo is Google’s scale-out network fabric for large AI workloads. Google says it can connect 134,000 TPUs inside one data center and more than one million TPUs across multiple sites.

What is the Agentic Data Cloud?

It is Google’s new name for its data platform strategy for agentic AI, built around Knowledge Catalog, Data Agent Kit, Lightning Engine for Apache Spark, and Cross-Cloud Lakehouse.

What problem is Knowledge Catalog trying to solve?

It is meant to turn scattered enterprise data, including files and unstructured content, into agent-ready business context with searchable semantics and richer grounding.

What is Cross-Cloud Lakehouse?

Google describes it as a BigQuery-led, Apache Iceberg-based lakehouse approach that lets organizations query data across Google Cloud, AWS, and Azure without first copying it into one place.

What did Google say about security in the agent era?

Google argued that security must operate at machine speed because attacker coordination is faster than before. It introduced new Security Operations agents and tighter integration with Wiz.

What new security agents did Google announce?

Google highlighted Threat Hunting, Detection Engineering, and Third-Party Context agents inside Google Security Operations.

Why was Wiz such a big part of the keynote?

Wiz gives Google a stronger cloud and AI security story across multiple environments, including better visibility into agent studios, AI applications, shadow AI, and code-to-cloud risk paths.

What is Workspace Intelligence?

Workspace Intelligence is Google’s new context layer across Workspace apps, files, collaborators, and ongoing work. It is meant to reduce information hunting and improve context-aware creation and retrieval.

Did the keynote say anything important about Microsoft 365?

Yes. Google said Workspace migration is now up to five times faster and added more interoperability features for mixed Google-Microsoft environments.

Was the keynote mostly about AI models?

No. Models mattered, but the keynote spent more time on platform controls, data grounding, infrastructure, security, and workflow delivery than on model benchmarks alone.

Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency

Inside the Google Cloud Next ’26 keynote and the blueprint for the agentic enterprise

This article is an original analysis supported by the sources cited below.

Google Cloud Next ’26 Opening Keynote
The full keynote video used to align the article’s chapter-based structure and timing.

Opening keynote: The agentic cloud
Official session page describing the keynote as a blueprint for the agentic enterprise.

Welcome to Google Cloud Next ’26
Thomas Kurian’s adapted keynote transcript and Google Cloud’s main launch summary.

Google Cloud Next 2026: News and updates
Google’s central roundup page for Cloud Next ’26 announcements.

Cloud Next ’26: Momentum and innovation at Google scale
Sundar Pichai’s official recap, including Google’s internal AI usage claims.

Introducing Gemini Enterprise Agent Platform
Primary launch post for the new agent platform and its build-scale-govern-optimize model.

The new Gemini Enterprise: one platform for agent development
Explains how the Gemini Enterprise app and Agent Platform fit together.

What’s new in Gemini Enterprise
Details long-running agents, Inbox, Projects, Canvas, and collaboration features.

Partner-built agents available in Gemini Enterprise
Covers the Agent Gallery, partner ecosystem, and Google’s partner fund.

Our eighth-generation TPUs: two chips for the agentic era
Google’s TPU 8t and TPU 8i announcement with the clearest technical breakdown.

AI infrastructure at Next ’26
Explains Virgo Network, storage improvements, and the wider AI Hypercomputer stack.

Next ’26 storage announcements
Official storage update covering Rapid Bucket, Smart Storage, and Managed Lustre.

What’s new in the Agentic Data Cloud
Google’s main data platform launch post for Knowledge Catalog and agent-ready context.

The future of data lakehouse for the agentic era
Details the cross-cloud lakehouse strategy and Apache Iceberg-based interoperability.

Unveiling new BigQuery capabilities for the agentic era
Explains BigQuery’s lakehouse, MCP, and cross-cloud analytics direction.

Next ’26: Redefining security for the AI era with Google Cloud and Wiz
Primary source for Google’s security claims, new agents, and Wiz integration.

Introducing Workspace Intelligence
Google Workspace’s launch post for the new real-time context layer across apps and data.

10 more announcements from Google Workspace at Cloud Next ’26
Workspace recap covering migration speed, interoperability, and related product changes.

Next ’26 day 1 recap
Google Cloud’s day-one summary with concise descriptions of the biggest launches.

Alphabet says capital spending in 2026 could double, cloud business booms
Used for external confirmation of Alphabet’s reported 2026 capex plan.

10 leading enterprises show why agents mean business
Google’s customer roundup used to assess how the keynote framed real-world adoption.

Joint statement from Google and Apple
Official statement supporting the keynote’s Apple-related claim about Gemini-based Apple Foundation Models.