Google I/O 2025: AI Mode and Gemini rewrite Search (and SEO)
The future of Google Search is “search that goes beyond information to intelligence.” In fact, it is a “total rethink” of the very concept of search. On stage (and in statements on the sidelines) at Google I/O 2025, Sundar Pichai, CEO of Alphabet, and Elizabeth Reid, head of Google Search, outlined a clear vision: a conversational, continuous experience, enhanced by artificial intelligence, that is meant to become the benchmark for tomorrow’s search. The first concrete step, available now in the United States, is AI Mode: a new tab in search results where answers go beyond a list of links to summarize content, show images, graphs, and sources, continue the conversation, and offer suggestions. This transformation involves Gemini, Chrome, shopping, productivity tools, development, and content. It also opens up new scenarios, interesting but also complex. What role will websites play in this new ecosystem? How will you gain visibility if Google responds directly? Who will drive traffic if interaction moves away from traditional links?
How search is changing (and why it matters to us)
Google I/O has evolved from an event for developers into the moment when Google shows the direction it intends to take for the future of the web. In recent years, the conference has taken on an increasingly central role in the agendas of professionals, and the themes of this year’s edition closely affect those who work every day with visibility, content, advertising, and online commerce.
Among the over one hundred announcements, there are several that signal a profound change: interactions are becoming more guided, the user experience more filtered, and classic link-based search is transforming into something more fluid, conversational and—above all—managed by artificial intelligence.
Some of these new features are already active in the United States, while others are being tested in Labs. However, they all point in the same direction: AI is no longer an accessory within Google. It is the new starting point.
A new face for Google Search: AI Mode and other announcements
For those involved in SEO, marketing, advertising, or content, Google I/O 2025 marks the beginning of a new phase: not so much because it introduces new technologies, but because it puts into operation a different way of conceiving the online experience.
Searches are no longer limited to returning results: answers are constructed, integrated, and personalized by AI, pushing the classic SERP progressively into the background. User paths become less linear, less click-driven, and more guided by suggestions, summaries, and conversational interactions. Artificial intelligence enters the key points of digital visibility: writing, searching, navigation, and conversion.
The words of top executives confirm this: Google’s ambition is to change the very way people search online. With the introduction of AI Mode, the classic SERP is joined by a new tab where answers come in conversational form: the user dialogues with the engine; receives summaries of content, images, links, and tables; and can ask follow-up questions to get tailored answers. Google calls the underlying technique ‘query fan-out’: a complex question is broken down into several sub-queries, each run as an independent search, and the partial results are merged into a single integrated response.
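To make the ‘query fan-out’ idea concrete, here is a minimal sketch in Python. It makes no claim to reflect Google’s actual implementation: decompose_query, run_search, and synthesize are hypothetical placeholders for the decomposition, retrieval, and summarization steps, which in AI Mode would be handled by Gemini and the search index.

```python
from concurrent.futures import ThreadPoolExecutor

def decompose_query(question: str) -> list[str]:
    """Hypothetical step: an LLM splits a complex question into sub-queries."""
    return [
        f"{question} (definitions)",
        f"{question} (recent data)",
        f"{question} (expert reviews)",
    ]

def run_search(sub_query: str) -> str:
    """Hypothetical step: each sub-query is run as an independent search."""
    return f"results for: {sub_query}"

def synthesize(question: str, partials: list[str]) -> str:
    """Hypothetical step: an LLM merges the partial results into one answer."""
    return f"Integrated answer to '{question}', built from {len(partials)} sub-searches."

def query_fan_out(question: str) -> str:
    sub_queries = decompose_query(question)
    # Fan out: the sub-queries run in parallel, like independent searches
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(run_search, sub_queries))
    # Fan in: the partial results are merged into one conversational response
    return synthesize(question, partials)

print(query_fan_out("best e-bike for hilly commutes under $2,000"))
```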
Supporting all this are more powerful models (Gemini 2.5 Pro), capable of handling complex requests, generating reports, interpreting images, and combining different inputs. For those who work with content, visibility, advertising, or e-commerce, this means a new dynamic of interaction with Google: more guided, more closed, perhaps less predictable.
The 5 innovations that are (immediately) revolutionizing digital work
But let’s go beyond the headlines and special effects and take a closer look at the innovations that affect those who work in the digital world: in short, what we will all soon be dealing with, at different levels and in different roles.
- AI Mode in Search
The central innovation of the entire Google I/O 2025 event: an alternative mode on the search results page, where users can ask questions and receive answers generated directly by Gemini. The results are visually rich: there are text summaries, images, quotes, insights, and the possibility to ask further questions to continue the conversation. This is not an isolated experiment: Google has announced that this will be the main direction of search in the near future. Linked sources remain, but the experience is entirely new. It is currently available in English in the US.
- Deep Search
Within AI Mode, Deep Search is available for complex queries: multi-format reports with citations, contextual analysis, and visualizations. It allows users to write a long query and receive a comprehensive response in return, structured as a mini-report with sections, sources, data, graphs, and references. It is useful for those seeking in-depth information—and represents a paradigm shift for those who produce informational and analytical content. Currently available as a test in Google Labs.
- Search Live (Project Astra)
An evolution of Lens towards ‘intelligent’ interaction with the physical world. With this feature, the camera becomes the new starting point for search: just frame a real object or scene to receive an AI response with explanations and context. It is an extension of Google Lens that leverages the new Gemini models to interpret the environment in real time. The project is part of the development of the “permanent” AI assistant that Google envisions for the coming years.
- AI-assisted shopping
From virtual try-ons to price tracking with automatic purchasing, shopping is now guided by an assistant. The new shopping experience, based on AI Mode + Shopping Graph, includes the ability to virtually try on clothing items in your own photos and receive personalized recommendations. But that’s not all: an AI agent can monitor prices, compare options, and even complete the purchase when it finds the best deal within the set budget (see the sketch after this list). This is a radical change for the world of e-commerce, with implications for online advertisers as well.
- Gemini in Chrome
The Chrome browser becomes an active and intelligent space. Integration with Gemini allows users to receive summaries of page content, explanations of terms or concepts, and useful suggestions while browsing. In the future, it will also be able to handle more complex actions such as filling out forms, interacting with sites, and coordinating multiple tabs simultaneously. For now, it is only available to those with a Pro or Ultra subscription, but the rollout is expected to expand—a clear sign of the company’s future direction.
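To visualize the agentic shopping flow mentioned above, here is a minimal, deliberately simplified sketch of the “buy when the deal fits the budget” loop. fetch_price and complete_purchase are hypothetical helpers (simulated here so the snippet runs); Google’s actual agent operates inside the Shopping Graph and its own checkout flow.

```python
import random
import time

def fetch_price(product_url: str) -> float:
    """Hypothetical helper: in reality, query the item's current price.
    Simulated with a fluctuating value so the sketch is runnable."""
    return round(random.uniform(100.0, 160.0), 2)

def complete_purchase(product_url: str, price: float) -> None:
    """Hypothetical helper: the agent would check out on the user's behalf."""
    print(f"Purchased {product_url} at ${price:.2f}")

def watch_and_buy(product_url: str, budget: float, interval_s: float = 1.0) -> None:
    """Poll the price until it falls within budget, then buy once and stop."""
    while True:
        price = fetch_price(product_url)
        if price <= budget:
            complete_purchase(product_url, price)
            return
        time.sleep(interval_s)  # a real agent would wait hours, not seconds

watch_and_buy("https://example.com/product/123", budget=120.0)
```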
An increasingly AI-native ecosystem
In addition to the big announcements about Search, Google has shown an increasingly broad vision of artificial intelligence as a cross-cutting infrastructure, capable of entering creative processes, everyday productivity, software development, advertising, and even interface design.
Google is therefore also investing in AI-based tools: the ones used to write, draw, design, and create products and content.
Artificial intelligence is becoming the invisible engine of a new way of working: less input, more output. All it takes is a prompt to get a layout, image, video, interface, or logical structure. Some of these tools are already being tested or are accessible in Labs, others will come later, but the principle is now clear: creativity and design are becoming automated, and those who work in the digital world will need to know how to guide them.
The five most innovative tools presented at Google I/O 2025
Let’s take a quick look at five of the tools announced at the conference, grouped under what we could call “AI everywhere”: creativity, interfaces, advertising.
- Veo, Imagen 4, and Google Flow: creative AI for images and videos
Visual content generation is entering a new phase: with Veo 3, you can create realistic videos from text or visual prompts, with support for elements such as music, sounds, and complex movements. Google is proposing it as a solution for storytelling, promotions, and visual explanations. Imagen 4 improves photographic quality and context understanding, while Flow (still in its initial phase) is an AI filmmaking tool that combines Veo and Imagen to build clips and scenes from simple prompts. These tools are designed for creators, marketers, and content strategists, who can save time and multiply their output.
- Stitch: design and UI generated from prompts
One of the most innovative (but still little-known) tools is Stitch, a technology that generates user interfaces, layouts, and code from textual descriptions or visual sketches, rather than pixel by pixel. It is intended for designers and developers, but it will also have an impact on the design of funnels, landing pages, e-commerce, and digital publishing products.
- Canvas and AI tools for developers and teams
The Labs package also includes tools designed for product and development teams: Canvas is a collaborative environment for designing ideas visually (a space for virtual visual brainstorming), while other tools allow you to create quizzes, educational content, micro-apps, simulations, or interactive experiences generated with AI. These tools are designed for product, development, and education teams, but they also open up engagement scenarios for those working in content marketing.
- AI-native advertising and conversational context
Google has confirmed that it is testing new advertising formats within AI Mode. This means that conversations with the search engine may also include sponsored suggestions, contextual ads, and product links. It is not yet clear how they will be presented or how distinguishable they will be from the rest of the responses, but the idea is already being tested. For those involved in performance marketing, this will be a new space to understand (and conquer), as it will profoundly influence the way campaigns and funnels are planned.
- Beam: the next frontier in smart video calls
Originally known as Project Starline, now renamed Beam, this experimental project represents a quantum leap in remote communication: a physical station equipped with six cameras and AI technology capable of reconstructing faces and environments in 3D, creating a realistic sense of presence during video calls. It is not intended for mass consumption (at least for now), but it demonstrates where Google wants to go: integrating AI, computer vision, and natural interaction to redefine even long-distance relationships. For those working on digital events, immersive experiences, or remote assistance, it is a strong signal.
The most interesting announcements for the general public (and for those who don’t do SEO)
Not all the new features announced at Google I/O are designed for digital professionals, developers, or insiders. Some directly affect the experience of millions of people—from the way we use our phones to our browsers and video calls—attempting to redefine how we all interact with technology in our daily lives.
The traditional voice assistant is giving way to a much more advanced form of interaction: Gemini Live, already available on Android and iOS devices, allows you to talk to artificial intelligence in a natural way, share your screen in real time, and receive visual or contextual assistance on any app or content. This is a huge leap forward from the old Google Assistant, transforming your phone into a conversational and proactive tool.
For example, just open a document or a complex screen and ask the AI, “Explain what this part means” or “What should I do now?” Gemini analyzes what it sees and responds instantly, providing contextual guidance and suggestions, including voice prompts.
Making all this possible are the new Gemini 2.5 models, available in Pro and Flash versions. The former, more powerful, is designed to handle complex questions, multimodal requests, and articulated reasoning; the latter is optimized for quick, fluid, and lightweight responses. Both are at the heart of Google’s AI experience, integrated into products, apps, and features.
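For developers, the Pro/Flash split is already visible in the public Gemini API. Below is a minimal sketch using the google-genai Python SDK (pip install google-genai), assuming an API key is set in the GEMINI_API_KEY environment variable; model names and availability may vary by region and change over time.

```python
from google import genai

# The client picks up the API key from the GEMINI_API_KEY environment variable
client = genai.Client()

# Flash: optimized for quick, fluid, lightweight responses
quick = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize in one sentence why HTTP caching matters.",
)
print(quick.text)

# Pro: designed for complex questions and articulated reasoning
deep = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="Compare three strategies for paginating a REST API, with trade-offs.",
)
print(deep.text)
```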
Not surprisingly, the company has also introduced two subscription plans: AI Pro ($20 per month) and AI Ultra ($250 per month), which offer early access to the most advanced features and best-performing models. For now, these plans are reserved for US users, but they clearly indicate that AI will become an increasingly customizable — and paid — component of the everyday digital experience.
Some more creativity-oriented technologies, such as Veo, Imagen 4, and Flow, were also presented with a consumer focus: tools for generating videos, images, or multi-format content from simple commands. Although their use is immediately appealing to creators and professionals, it is clear that Google aims to make them available to a wider audience, with simplified interfaces and everyday use cases.
Together, these innovations paint a coherent picture: a smarter personal assistant, closer to the user and adapted to each individual’s needs. This is Google’s vision for the next technological leap: no longer tools to be opened and used deliberately, but integrated, ever-present functions able to step in at every stage of the digital experience and adapt continuously to the context.
What changes after Google I/O 2025: the questions everyone is starting to ask
The announcement of AI Mode and the new conversational features has generated some enthusiasm, but more than anything else, it has raised a wave of questions about content visibility, traffic distribution, and the very sustainability of an open web where content published online generates value—and revenue—for those who produce it.
The first and most immediate concern is about websites: if Google and AI respond directly, what happens to clicks? AI Mode displays answers generated by the Gemini model, summarizing information indexed by Google. Links to sources remain, of course, but they are less central, less obvious, and more “optional” for the user. This could profoundly change the dynamics of organic traffic — and that’s no small detail.
The second question follows hot on its heels: how will visibility be achieved in this new context? Google has stated that AI Mode values a variety of reliable and well-structured sources, but has not yet clarified the criteria that determine the presence—and position—of content in the generated responses. We are no longer just talking about ranking: we are talking about “inclusion” in a new response interface.
Then there is the crucial issue that concerns editors, publishers, content creators, and those who monetize advertising: who will drive traffic? And how will the open web be sustained if the first—and perhaps only—answer comes from AI? Google has always stated that it wants to support the ecosystem, but the AI Mode model is still under construction. We know that it is powered by the classic index, but we also know that the presentation of sources is very different from a traditional SERP—more guided, more concise, less navigable.
Visibility at risk: a scenario we have already seen
This feeling of uncertainty is not new: every time Google has introduced something new in Search, there has been talk of revolution, and often even of “the end of organic traffic.”
This happened with direct answers, with weather boxes integrated into SERPs, with featured snippets, with AMP, with the news carousel, with the “People also ask” boxes and, more recently, with AI Overviews. Each time, there were fears of a collapse in visibility for websites, but each time, traffic continued to flow: somehow we survived!
If anything, what has changed is the way information is presented, clicked on, and valued. Those who have been able to adapt have continued to be present and visible.
The key factor was, and still is, the fact that Google has always needed other people’s content to function. Even today, despite AI, Google needs content: reliable sources, up-to-date pages, and verified experiences. Artificial intelligence can summarize, explain, and suggest—but without content, it has nothing to process. And Google, which bases much of its business on that content, cannot afford to cut out those who produce it.
What to do now (and why there’s no need to panic)
A lot has changed. More will change. But not everything, and not overnight. And that’s precisely why we should “keep calm.” The new features announced will not all be rolled out tomorrow morning, nor will they be available everywhere at the same time.
AI Mode, for example, is only active in the United States and in English, with a gradual rollout. New features related to Gemini, visual search, conversational and agentic experiences are still being tested, available in Labs or linked to AI Pro and Ultra subscriptions, and not immediately accessible globally.
In short: there is time. Time that we can use to understand what is really changing, which tools work best, how users are reacting, and how search behavior is evolving. And above all: to prepare ourselves.
In this context, analysis and monitoring tools become even more important. This is where SEOZoom comes in: for some time now it has been evolving from an SEO-only platform into a tool for navigating a scenario dominated by artificial intelligence, helping users observe changes, evaluate real performance, and update strategies in near real time.
The future of search will be more complex, more fluid, and more mediated by intelligent systems. But it will still be readable, analyzable, and optimizable — for those who have the right tools to do so.
[cover image: screenshot from the YouTube video “I/O ’25 in under 10 minutes”]