
Image: Google Cloud, from Welcome to Google Cloud Next '26
Hello! I'm Rodrigo, and I work as a Principal Engineer at Reiwa Travel, leading technical solutions with a focus on AI adoption and security.
From April 22 to April 24, 2026, Google Cloud Next '26 was held at the Mandalay Bay Convention Center in Las Vegas. I went there in person, and in this post I want to share what I saw, what I heard, and what I learned by talking with engineers from many different companies around the world.
About Google Cloud Next '26

Image: Google Cloud, from Day 1 at Google Cloud Next '26 recap
Conference Overview
What is Google Cloud Next '26?
Google Cloud Next '26 is the biggest yearly event from Google Cloud. The theme this year was "Where big ideas become a reality" — a place to see how AI and cloud can turn ideas into real results.
- Dates: April 22 (Wed) – April 24 (Fri), 2026
- Place: Mandalay Bay Convention Center, Las Vegas
- Content: Keynotes, breakout sessions, hands-on labs, expo, Customer Engagement Center
Schedule
The event ran for 3 days:
- Day 1 (April 22): Opening Keynote, Expo Experiences, Welcome Happy Hour
- Day 2 (April 23): Developer Keynote, sessions all day, "Next at Night" party
- Day 3 (April 24): Sessions all day, closing
There were big keynote talks, smaller breakout sessions, hands-on labs, and a huge expo floor with hundreds of booths from startups and large companies.

Image: Google Cloud, from Day 2 at Google Cloud Next: A marathon developer keynote
The Big Message from the Opening Keynote

Image: Google Cloud, from Day 1 at Google Cloud Next '26 recap
You can watch the full Opening Keynote on YouTube: https://www.youtube.com/watch?v=11PBno-cJ1g
If I had to summarize the keynote in one line, it would be this:
"The age of chatbots is over. The age of AI agents that do the real work has started."
Google Cloud put all of its AI products under one new brand called Gemini Enterprise. The big idea is that companies should not just talk to AI — they should let AI agents actually run real business tasks. To make this safe and possible, Google announced many new tools.
Here are the 3 points I think are most important:
1. Building agents is now a "platform"
Google announced a full set of tools to build, run, and manage AI agents:
- Agent Designer v3 — for non-engineers to build agents using natural language
- Agent Development Kit (ADK) 2.0 — for developers to build complex agents
- Agent Runtime — to run agents fast (under 1 second start time, up to 3,000 agents per project)
The important change is this: agents are no longer "personal tools". They are now shared software that the whole company manages, like any other system.
2. Security must now move at "machine speed"
In his keynote talk, Francis deSouza shared some scary numbers about how fast attacks have become:
- Time from a bug being public to being attacked: minus 7 days (attacks happen before the patch is even released)
- Time from first attack to handing over to the next attacker group: from 8 hours down to just 22 seconds
Humans cannot defend at this speed anymore. So Google announced Agentic SOC — a security operation center where AI agents do the work:
- Triage and Investigation Agent — checks new alerts and finishes the analysis in 1 minute (it used to take 30 minutes)
- Detection Engineering Agent — automatically writes new detection rules
- Threat Hunting Agent — proactively looks for hidden attackers
To protect the agents themselves, Google also announced:
- Agent Identity — gives every agent a unique ID
- Agent Gateway — controls all agent-to-agent and agent-to-tool communication
- Model Armor — blocks prompt injection and data leaks
3. Data systems are being redesigned for agents, not humans
The data tools are also changing.
Knowledge Catalog (the new name for Dataplex) has become a "Universal Context Engine". It can read structured data (BigQuery), unstructured data (PDFs, images), and SaaS data (SAP, Salesforce) — all without copying or moving the data.
BigQuery also got many new features (BigQuery Graph, BigQuery Measures, Fluid Scaling) so AI agents can use it without making mistakes, even at very high scale.
The clear direction is: moving from "humans look at dashboards" to "agents read data, decide, and take action".
The Keynote Is Just the Beginning
The keynote was full of important announcements, but I quickly learned that the value of Google Cloud Next goes much further than what happens on the main stage.
The venue had:
- The Customer Engagement Center, where you can talk directly to Google experts
- The Expo floor, where hundreds of partners and startups show their products
- Many discussion groups, happy hours, and informal meetups
I spent a lot of time in these places. I focused on three things:
- Talking with engineers from small, medium, and large companies to learn how they handle AI adoption and security today
- Walking the Expo floor and seeing the wide ecosystem of partners and startups building on top of Google Cloud
- Talking directly with Google experts about my real problems at NEWT and getting their feedback
I had the chance to talk with many local companies and engineers from different regions, which helped me collect a wide range of views instead of just one perspective.
In the next sections, I will share the most important things I learned, focused on two topics: AI Security & Governance and AI Adoption Strategies.
Key Topic #1: AI Security and Governance

Image: Google Cloud, from Day 1 at Google Cloud Next '26 recap
Everyone Is Fighting "Shadow AI" First
The number one topic in almost every conversation was Shadow AI — when employees use AI tools that the company has not approved or even knows about.
Medium and large companies are doing these things to fight Shadow AI:
- Block by default — If a tool is not approved by the company, it is blocked for everyone
- Approval process — If an employee wants a new AI tool, they must apply to the security team first
- Approved list only — The company picks the tools it needs and blocks the rest
- Track everything — All actions by humans and by AI agents are logged (often using OpenTelemetry)
The "let's give every engineer a free AI tool and see what happens" phase is already over at large companies. They have moved to "we know exactly who, with which agent, used which data, when".
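To make that last point concrete, here is a minimal sketch (my own, not from any session) of the kind of structured audit record this logging produces. The dotted attribute names only mimic the OpenTelemetry naming style, and every ID below is made up:

```python
import json
import time

def audit_record(agent_id: str, user_id: str, action: str, data_source: str) -> str:
    """One structured audit event: who, with which agent, used which data, when."""
    event = {
        "timestamp": time.time(),
        "agent.id": agent_id,        # dotted keys mimic OpenTelemetry attribute style
        "user.id": user_id,
        "action": action,
        "data.source": data_source,  # e.g. a BigQuery table the agent read
    }
    return json.dumps(event)

# Example: record that an agent queried billing data on behalf of a user.
line = audit_record("invoice-agent-01", "alice@example.com",
                    "bigquery.query", "billing.invoices")
```

Once every action is a record like this, answering "who, with which agent, used which data, when" becomes a log query instead of an investigation.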
Agent-to-Agent Communication Is Now a Security Topic
Another big change: security is no longer only about people. It is about agents too.
Large companies are doing things like:
- Giving each agent its own ID (similar to Agent Identity)
- Controlling how agents talk to each other (similar to Agent Gateway)
- Setting clear rules about which agent can access which data
- Blocking prompt injection, data leaks, and Shadow AI access (similar to Model Armor)
The new way of thinking is: "agents are not just helpful — they are a new attack surface". This was a clear shift in how the most advanced companies design their security.
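To make the gateway idea concrete, here is a toy sketch of the two checks such a gateway might combine: a deny-by-default allow-list per agent identity, plus a very crude scan of the payload for injection phrases. The policy table, agent names, and patterns are all invented for illustration — real products like the ones above use far richer detection:

```python
import re

# Hypothetical allow-list: which agent identity may call which tool or peer agent.
POLICY = {
    "billing-agent": {"bigquery.read", "slack.post"},
    "support-agent": {"slack.post"},
}

# Toy examples of prompt-injection phrases, for illustration only.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]

def gateway_check(agent_id: str, target: str, payload: str) -> tuple[bool, str]:
    """Deny by default: a call passes only if the route is allowed and the payload looks clean."""
    if target not in POLICY.get(agent_id, set()):
        return False, "denied: no route from this agent to this target"
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, payload, re.IGNORECASE):
            return False, "denied: suspicious payload"
    return True, "allowed"
```

The important design choice is the default: anything not explicitly allowed is blocked, for agents just as for humans.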
Move Work from Local PCs to the Cloud
From the conversations I had, I could see a clear shift: companies are moving heavy, risky work away from local laptops and into the cloud whenever possible.
The pattern looks like this:
- Employee writes a message in a chat UI
- The message goes to a cloud agent through MCP
- The agent runs in a safe, isolated cloud environment (not on the laptop)
- The result is sent back in chat, or saved to Google Drive
The idea behind this is simple but powerful: "don't keep risky work on the employee's laptop".
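The four-step flow above can be sketched in a few lines. Here `dispatch` stands in for whatever MCP client call sends the task to the cloud runtime — this is my own simplification, not an actual API:

```python
def route_to_cloud_agent(message: str, dispatch) -> dict:
    """Send the user's task to a cloud agent instead of running it on the laptop.

    `dispatch` is a placeholder for an MCP client call; the real transport and
    payload shape depend on your agent platform.
    """
    request = {
        "task": message,
        "execution": "cloud-sandbox",  # never "local": risky work stays off the laptop
    }
    result = dispatch(request)
    # The chat UI only ever sees the result, or a link to it in Drive.
    return {"reply": result["output"], "saved_to": result.get("drive_url")}
```

The laptop is reduced to a thin client: it sends text and receives text, while everything heavy or risky happens in an isolated environment.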
Key Topic #2: AI Adoption Strategy
After security, the next big topic was: how do you actually get your company to use AI?
Here are the patterns I saw most often, both in conversations on the floor and in sessions like "Developer AI: Tools, tactics, and team adoption" by Aja Hammerly and Maja Bilic from Google Cloud, which gave a useful framing of AI adoption in 3 phases: Evaluation, Rollout, and Adoption.
The "AI Champions" Program
The pattern I heard the most — and one that the session also walked through in detail — was the AI Champions model. Companies pick 1 or 2 Champions inside each department — not always the most technical engineers, but people who are trusted by their team and excited about AI. The 3 traits that came up everywhere were: being trustworthy, being excited about AI, and being curious about different tools.
Champions are responsible for:
- Helping their team start using AI tools
- Finding good use cases and turning them into templates
- Answering common questions from teammates
Above the Champions, there is usually an "AI Transformation Team" or "Champion Leads" group that decides on the strategy, budget, and security rules. This 3-layer structure (Lead → Champion → Engineer) lets adoption scale naturally instead of being pushed top-down by a single team.
Role-Based Access Control (RBAC) for AI Tools
Companies are not giving the same AI tools to everyone. They give different tools to different roles. Here is the pattern I heard most often:
| Role | What they get |
| --- | --- |
| General staff | Just a chat UI (basic Gemini) |
| Managers | Chat UI + access to internal company data |
| Developers | Coding agents (like Cloud Code, Gemini CLI) |
| Marketing | Tools made for campaigns |
| AI Champions | Permission to build and test new agents |
The reason is simple: giving an expensive coding agent to every employee is not optimal — for cost or for security. Match the tool to the work.
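In code, this pattern is just a role-to-toolset mapping with a deny-by-default check. The role and tool names below are placeholders I made up to mirror the table above:

```python
# Hypothetical role → allowed-tools mapping.
ROLE_TOOLS = {
    "general_staff": {"chat"},
    "manager": {"chat", "internal_data"},
    "developer": {"chat", "coding_agent", "cli_agent"},
    "marketing": {"chat", "campaign_tools"},
    "ai_champion": {"chat", "coding_agent", "agent_builder"},
}

def can_use(role: str, tool: str) -> bool:
    """Deny by default: unknown roles and unlisted tools get nothing."""
    return tool in ROLE_TOOLS.get(role, set())
```

Keeping the mapping in one place also makes cost and security reviews easy: adding a tool to a role is a one-line, auditable change.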
Agent Templates Shared Across the Company
When one team builds a useful agent, companies do not keep it within that team. Agents are shared across teams so other members can use them directly, and they also serve as templates that other teams can adapt to their own needs.
This way, automation built in one team can be reused by another team quickly, common patterns spread naturally across the company, and leaders can see what each department is automating.
A Knowledge Graph for Company Information
Another strong pattern was that companies want better control over their company knowledge. To achieve this, they build a Knowledge Graph that combines information from Notion, Slack, internal documents, and other systems into one place. Employees and AI agents can then access company information through it.
Access privileges from each original source are always respected — if a person cannot see a Notion page directly, neither they nor their agents can see it through the Knowledge Graph. This gives companies one clear place to manage access, track usage, and understand what data their AI is reading.
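A toy version of that rule — source ACLs travel with the content — might look like this. The node shape and fields are invented for illustration; a real system would sync permissions from Notion, Slack, and the other sources:

```python
# Each node keeps the access list of its source system (Notion page, Slack channel, ...).
GRAPH = [
    {"text": "Q3 pricing strategy", "source": "notion", "allowed_users": {"cfo", "ceo"}},
    {"text": "Onboarding guide",    "source": "notion", "allowed_users": {"cfo", "ceo", "alice"}},
]

def search(viewer: str, query: str, graph=GRAPH) -> list[str]:
    """Return only nodes the viewer could already open in the source system."""
    return [
        node["text"]
        for node in graph
        if query.lower() in node["text"].lower() and viewer in node["allowed_users"]
    ]
```

The filter runs on every query, so an agent acting for a user can never surface a document the user could not open themselves.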
One Strong Reminder from the Session
One line from Aja and Maja's session stayed with me: "Change is constant." Whatever tools and process you build now, you have to keep adapting. AI development moved from "copy-paste from Stack Overflow" in 2022 to agentic development today, and it will keep evolving. Don't try to solve everything once — design your governance to be reviewed and updated often.
A Rich Ecosystem: Google Cloud and Its Partners
Another thing that left a strong impression on me was how wide the ecosystem around Google Cloud has become.
Many of the new products Google announced are powerful platforms that companies will adopt step by step over the coming year as they reach general availability. At the same time, the Expo floor showed a vibrant ecosystem of partners and startups that are already solving pieces of the same problems today — Knowledge Graphs, agent platforms, AI security, A2A communication, and more.
This is actually very healthy for customers like us. It means companies thinking about AI strategy have flexible paths:
- Adopt Google's platforms as they become generally available, with the benefit of deep integration across the Google Cloud ecosystem
- Use partner or startup solutions today for parts where you want to move quickly, with a plan to integrate with Google's platform later
- Mix both — combining Google's roadmap with specialized partners where it fits
What I felt clearly is that Google Cloud is not just a product — it is a platform plus a partner ecosystem, and the value comes from how you combine the two for your business.
What I Want to Bring Back to NEWT
Finally, I want to share three things I want to start working on at Reiwa Travel / NEWT, based on what I saw.
1. Identify and Empower AI Champions
At NEWT we already have 100+ staff — both engineers and non-engineers — who are using AI heavily and getting great results. These are the people who should become our Champions and role models for others. What we are missing is the explicit framework to recognize them as Champions, give them time to lead, and connect them across teams. Building a Champion Leads → Champions → Team Members structure would let us scale AI adoption naturally instead of pushing top-down.
2. Build the "Shape" of AI Governance
With 100+ employees actively using AI tools, we need a clear picture of "who is using which tool, skill, agent, or MCP — to access which data".
We have already started some pieces — we recently introduced monitoring of the tools, skills, agents, and MCPs everyone is using, along with reviewing MCP servers for security. The next step is to connect these into one clear governance story, including RBAC and shared agent templates.
3. Build More Internal MCPs and Cloud-Based Agents
The direction is clear from the industry: less work on local laptops, more work running safely in the cloud. For NEWT, this means building more internal MCPs to safely connect our staff with company data, and creating agents that do more work in the cloud for us, instead of running heavy or risky processes on each person's laptop.
Closing Thoughts
The strongest thing I felt at Google Cloud Next is this: the discussion of "should we use AI or not?" has already been settled in the rest of the world.
The questions companies are asking now are different:
- How do we put agents into the company safely?
- How do we get every department to use AI well?
- How do we move agent work from laptops to the cloud?
- How do we redesign our data systems so agents can use them correctly?
These are not just questions about buying new SaaS. These are questions about how we design the way our company works.
Many of the products Google announced will roll out gradually over the coming year. Seeing the direction that Google Cloud and the broader industry are moving was the biggest value of this trip for me.
At NEWT, we want to move quickly and build our governance, RBAC, agent platform, and data systems step by step. If your company is also thinking about this, I would love to talk and exchange ideas 🙌
About "NEWT Tech Talk"
At Reiwa Travel, we hold a monthly LT (lightning talk) event to share technical knowledge. Anyone interested in the topic or in our company is welcome to join.
For our regular tech events, please follow us on connpass and don't miss the latest updates!
We Are Hiring at Reiwa Travel!
If this post made you interested in our company or our product, please reach out — we would love to hear from you!
If you just want to chat informally first, we also offer casual interviews. Feel free to contact us anytime.
See you in the next blog! Have a nice trip ✈️

