Beyond the click: becoming the choice across AI, new SERPs, and Bing
What if I told you that the metrics you’ve always based your decisions on—average position, CTR, clicks from the top position—are losing their meaning?
Search has expanded across different, intelligent surfaces: AI Overview summaries (and soon AI Mode) shift the focus and occupy the top of the page, conversational chats with Large Language Models compress the search journey, and Bing results feed Microsoft’s artificial intelligence (and more besides). Then there are product widgets that intercept immediate needs, and ChatGPT which, when it cites you, sends traffic that is already ready to convert, with fluctuations linked to how it weights sources.
In short, being first on Google is not enough, and it is worthless if the user chooses elsewhere. If you want to remain visible, you need to move from positioning to choice: monitor the points where the user decides, earn citations from AI engines, strengthen the brand’s entities, and read the signals that really matter. In these pages, you will find a practical method for realigning KPIs and content, using Google and Bing in a complementary way, and turning exposure into demand and sales.
What “beyond the click” means in digital marketing in 2025
For years, our obsession has been to climb the SERP to capture the largest share of clicks, focused almost entirely on the top three positions.
Today we are disoriented. The massive introduction of artificial intelligence into search engines has fragmented the user journey, forcing us to recalibrate our strategies, to stop thinking about a single point of contact and to start thinking in terms of “presence” and “preference” on a much larger playing field.
“Beyond the click” is the transition from an exposure metric (the click, the impression) to a choice metric. The user reads a generative summary, consults product results, or forms an opinion within a conversational thread; sometimes they land on the site, sometimes they don’t, but the decision is made beforehand. To compete, then, you need to monitor the surfaces of choice and learn to read the signals with an analytics dashboard that combines visibility, citations, referral quality, and conversions.
The new trajectories of clicks (and why CTR alone is not enough)
This leads to a change in interpretation: not just “how much you appear and where,” but where you are preferred. For this reason, in addition to CTR, what matters is the presence or absence of Overviews on your query basket, appearance in boxes, citations in LLMs, and yield per source (Google/Bing/LLM), taking into account that AI referrals today are heavily skewed toward desktop.
These are the three main forces that are reshuffling the deck.
- Google and the hybrid SERP
The first concerns Google and the evolution of the results page. If you have an editorial site or blog with informative content, you have certainly noticed it: impressions are up, positions remain roughly stable, but clicks are down dramatically. This is a sign that CTR and visibility are becoming misaligned, and the main culprit is AI Overview. The data confirms this: a study by GrowthSRC Media, reported by Search Engine Journal, indicates that position 1’s CTR has fallen from 28% to 19% (-32%) and position 2’s from 20.83% to 12.60% (-39%).
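As a quick sanity check, the relative declines follow directly from the reported CTRs:

```python
# Arithmetic check of the relative CTR drops reported in the GrowthSRC study.
p1_before, p1_after = 28.0, 19.0
p2_before, p2_after = 20.83, 12.60

drop_p1 = (p1_before - p1_after) / p1_before * 100  # ~32.1%, reported as -32%
drop_p2 = (p2_before - p2_after) / p2_before * 100  # ~39.5%, reported as -39%

print(f"Position 1: -{drop_p1:.1f}%  Position 2: -{drop_p2:.1f}%")
```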
At the same time, however, a complementary phenomenon is emerging: users are no longer stopping. They read the summary provided by AI and then scroll down, skipping the first links and rewarding positions 6 to 10 with an increase in CTR of about 30%, in search of further information and different perspectives.
Immediately below the AI summary, product widgets (Shopping modules or “Popular products”) come into play, which in many categories open comparison interfaces directly in SERP.
Although Google continues to claim that “quality clicks” are stable year-on-year and that part of the reported declines are due to partial methodologies, what we see is that a significant portion of the choice is made without leaving Google—the now historic phenomenon of zero-click searches—and click-outs no longer reward those who are simply “positioned,” but those who are most useful, convenient, and convincing at the moment the user is making their decision.
- Conversational engines and choice within chat
The second force is the rise of conversational AI engines. Platforms such as ChatGPT and Perplexity, and Google’s own AI Mode, are creating an alternative discovery channel. In Italy, we have so far only seen the impact of AI Overview, but in the US, AI Mode is already being analyzed, a conversational experience that Google is extending to over 180 countries (in English for now), also adding “agent” functions (such as restaurant reservations with deep links to booking) in the US for Google AI Ultra subscribers via Labs.
In this mode, the user dialogues, refines their request, and obtains filtered action proposals (booking slots, options close to their preferences). The choice matures within the conversation, before any clicks are made. The value for your brand lies as much in the ability to be cited as an authoritative source as in the possibility of receiving a click.
AI engines provide a discovery channel which, albeit with lower volumes, generates more mature clicks and flows that are sensitive to “citation weighting” experiments. Conversations with LLMs create a different corridor: those who ask, refine, receive citations, and then click often arrive with a clear intention. The citation becomes the trigger for the visit.
- The Microsoft ecosystem: Bing and Copilot are your second front
Finally, there is Bing. With a market share on desktops that, according to StatCounter (August 2025), has consolidated around 11.4%, it is no longer a negligible channel, especially for B2B and professional traffic.
But its real strategic value is being the “web grounding” engine for the entire Microsoft Copilot ecosystem and third-party chatbots, including ChatGPT and Meta AI. Every time an AI agent integrated into Windows, Edge, or Office searches for up-to-date information on the web, it is using Bing. Being strong here increases the chances of appearing in assisted responses, creating a second, very important way to intercept demand.
KPI of choice: the key metrics to understand if and where you are winning
If clicks and average position are no longer enough to describe success, how do you measure it? In addition to numbers, you need a dashboard that reads where the decision is made and how the traffic that really matters behaves.
Put together visibility signals, citations, and traffic yield by source, with different readings for each device. The goal is simple: understand which surface (AI Mode, “classic” results, LLM referrals, Bing desktop) really drives demand and sales, so you can shift your budget and content with precision.
- AI Overview: how often they appear and how they redistribute clicks
The presence of this feature has exploded: in the panel analyzed by GrowthSRC, queries that activate an Overview went from about 10,000 (August 2024) to about 172,000 (May 2025). This explains why top positions lose traction as users scroll further down toward lower results and interactive modules.
The case of editorial sites makes this tangible: MailOnline reports that, with AI Overview, the #1 result can fall below a 5% CTR on desktop and around 7% on mobile; even when the site is mentioned in the Overview, CTR remains much lower than normal. At the same time, Google is intensifying the boxes that open Shopping interfaces in SERP, shifting part of the comparison inside its ecosystem without an outbound click.
The picture is consistent: more Overviews → more distributed attention → decisions that mature before clicking out.
The first question is how often your basket encounters a page with AI Overviews. If the incidence increases, it makes sense to look at two position bands (1–3 and 6–10) instead of a single average position: when the 6–10 band holds steady while the Overview is present, you are seeing scrolling that moves the choice lower down.
Complete the reading with the presence of the most relevant features in your industry (popular products, price ranges, information sheets) and with what happens on the page: scroll depth, time to first action, elements touched above the fold. If these four readings tell the same story, you have a redistribution of attention that CTR alone does not explain.
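The two-band reading is easy to automate. A minimal sketch, assuming a Search Console-style export with hypothetical fields (query, position, impressions, clicks) plus a has_aio flag you add yourself from a SERP-feature check:

```python
from collections import defaultdict

# Hypothetical rows from a Search Console-style export, enriched with a
# has_aio flag indicating whether the query triggers an AI Overview.
rows = [
    {"query": "q1", "position": 2.1, "impressions": 1000, "clicks": 60, "has_aio": True},
    {"query": "q2", "position": 7.4, "impressions": 800, "clicks": 48, "has_aio": True},
    {"query": "q3", "position": 1.8, "impressions": 1200, "clicks": 240, "has_aio": False},
]

def band(position):
    """Map an average position to the 1-3 or 6-10 band."""
    if position <= 3:
        return "1-3"
    if 6 <= position <= 10:
        return "6-10"
    return "other"

def ctr_by_band(rows):
    """Aggregate CTR per (AI Overview present, position band) segment."""
    totals = defaultdict(lambda: {"impressions": 0, "clicks": 0})
    for r in rows:
        key = (r["has_aio"], band(r["position"]))
        totals[key]["impressions"] += r["impressions"]
        totals[key]["clicks"] += r["clicks"]
    return {k: v["clicks"] / v["impressions"] for k, v in totals.items()}

print(ctr_by_band(rows))
```

Comparing the same band with and without the Overview present is what reveals the redistribution that a single blended CTR hides.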
- LLM & ChatGPT/Perplexity: adoption, referrals, conversions, volatility
Conversations bring out more explicit needs and longer prompts (on average, about 80 words in the US and 91 in the UK), and when users leave the thread, they take with them a more mature intention that tends to convert better on transactional traffic. In Similarweb data analyzed by Kevin Indig, transactional traffic from ChatGPT converts at 6.9% in the US compared to 5.4% for Google, while in the UK it is around 5.5%, in line with Google.
In recent weeks, however, volumes have fluctuated: between July and August, referrals from ChatGPT were down 52% because OpenAI reweighted its preferred sources towards “answer-first” content, with Reddit accounting for over 10% and Wikipedia around 13% of linked responses. In practice, concise, verifiable, and frictionless pages are given more space; “brochure” material or aggressive calls-to-action tend to be less frequently referenced.
There are three things to look at here. First, presence as a source: how much of the responses include your brand and how stable that share is over 28 days. Second, post-click behavior: performance of the landing pages that appear in the responses (conversions and micro-actions consistent with the promise read in chat), read by device, knowing that over 90% of these referrals come from desktop. Third, volatility: compare the two signals. When your share of mentions moves in the same direction as post-click quality, you have a small but high-intensity channel; if presence declines while quality holds, you are observing a phase of rebalancing of sources on the topic, not a structural trend.
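To read the 28-day window rather than isolated peaks, a trailing mean over a daily mention-share series is enough. A minimal sketch (the series values are hypothetical):

```python
def rolling_mean(series, window=28):
    """Trailing mean over the last `window` points (shorter at the start)."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical daily share of AI answers that mention the brand (0-1).
daily_share = [0.12, 0.15, 0.11, 0.14, 0.35, 0.13, 0.12]
smoothed = rolling_mean(daily_share, window=28)
# The one-day spike (0.35) barely moves the trailing mean:
# decide on the smoothed trend, not on the peak.
```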
- Bing, the desktop corridor: signs of share and revenue, and role in the AI ecosystem
Speaking of devices, we mentioned that Bing is gaining ground on desktops, with a share of about 29% in the US (Comscore) and 11.6% worldwide on desktop (StatCounter), while Google slips slightly in the same areas; across all platforms, Bing still stands at about 4%.
The economic impact is tangible: Microsoft reports $1.6 billion in Search & News advertising revenue in the last fiscal year (+13% YoY, +20% net of TAC), thanks to more searches and higher value per search; each share point is worth about $2 billion in advertising opportunities. Although Bing remains at around 4% worldwide across all devices, these signals tell you that for desktop and business audiences, and in contexts where agents rely on its index, it counts for more than the overall average suggests.
Here, the metric is not “how much it is worth in absolute terms,” but how much it weighs in your mix where the desktop is central. Look at the share of sessions and conversions attributed to Bing, the yield differential on non-brand queries compared to Google, and the split by device. If you see that the desktop component supports quality better, that generic searches end with less friction, and that post-click latency is in line, you have a second corridor of demand that deserves space while Google’s SERPs reorganize.
The value that doesn’t come from clicks
Part of the user’s choice matures before they land on the site. You can intercept this by observing brand searches and direct access in the 24–72 hours following major exposure (highly visible overviews, recurring mentions in threads), and by weighing assisted conversions in paths where the first interaction was a “dense” view or an AI response.
There is no need to force attribution at all costs: what is needed is temporal and semantic consistency between exposure and action. When these signals align with the three readings above, you have a solid basis for shifting budget and priorities.
Your operational strategy: design the choice
If you want the user to choose you at new decision points, you need to orchestrate four interrelated plans: content designed for real intent, recognizable brand signals outside the site, sales pages capable of closing the comparison that begins in SERP, and an editorial offering that can also be “quoted” by generative systems.
Everything should be read with the same criteria you have set so far: where the choice is made, how traffic moves by source, and what role Google’s hybrid surfaces, conversational threads, and Bing’s desktop corridor play. Downstream, you need a data radar to understand where you are being chosen and where to shift your focus.
Relevant SEO: write for the intent that the user expresses
The goal of content is no longer just to “rank” for a keyword, but to be the most comprehensive, authoritative, expert, and well-structured resource on an entire topic, intercepting Google’s E-E-A-T framework.
Start with the intent and design content that exhausts it without referring elsewhere. The opening gives the answer the user expects, immediately followed by clarifications of variants and use cases, then you show limitations, contraindications, and verifiable examples. The sections should cover the questions that arise immediately after the first answer, because that is where algorithms and people decide whether to stay. Maintain consistency of entities between the website and official presences: brands, authors, products, and names must be legible and the same everywhere. Make key passages “reusable” with tables, short definitions, and relevant structured data, so you increase the likelihood of being cited when the Google page is enriched or when a generative response seeks a source—and, on a daily basis, you can easily repurpose your own content across multiple channels.
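Structured data is the most mechanical part of making passages reusable. As an illustration, a product page's key facts can be exposed as schema.org JSON-LD; a minimal sketch generated with Python's json module (the product name, brand, and price are placeholders):

```python
import json

# Hypothetical product; the property names follow schema.org's
# Product and Offer types.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
}

# Embeddable <script> block for the page's <head>.
snippet = (
    '<script type="application/ld+json">'
    + json.dumps(product_jsonld, indent=2)
    + "</script>"
)
print(snippet)
```

The point of generating this from the same source of truth as the page copy is the entity consistency described above: price and availability in the markup never drift from what the user reads.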
Brand signal: build recognition, not just pages
Relevance carries more weight if your name “comes back” in other contexts.
Publish well on your website, but also monitor the places where your audience really gets their information. A video showing the product in use, an editorial contribution in a credible publication, or useful participation in relevant communities are all signals that add up and make the brand more readable. Align author biographies, claims, and references, avoid naming overlaps, and take care of links and natural mentions: make key information about the brand easy to verify.
The sum of these elements builds familiarity and increases the likelihood of appearing when the Google page is dense or when a conversational engine selects sources.
E-commerce and retail: close the comparison that begins in SERP
If the results page already features tabs and carousels, your page must continue that comparison without friction. Align price and availability with the feed that powers Google, bring up a summary box with essential differences and selection criteria, immediately show useful variants and images, and explicitly state “who it is for/who it is not for” with concrete examples.
Categories work when they move from grids to operational guides for making choices, with relevant filters, comparative paragraphs, and transparent explanations of “who it is for” and “when it is convenient.” Product cards that make benefits, trade-offs, and social proof explicit reduce the distance between the shortlist created in SERP and the addition to the cart.
Remember: the user has scrolled past widgets and carousels, so they want quick confirmation. Perfect your copywriting by putting a summary at the top with three elements: a clear promise, verifiable distinctive points, and a primary CTA consistent with the intent. Integrate micro-components of social proof near the CTA (count of real reviews, snippets from a useful review, a “best seller in category” badge). Reduce the time to choice with quick filters and smart preselections based on the most frequent behaviors; test headlines, hero images, and microcopy on buttons; move trust-building elements (return policy, warranty) into the area visible on mobile. In the tabs, use compact “Q&A” blocks fed by recurring support questions: they answer the same doubts that the panels at the top have raised and eliminate the last friction point before the shopping cart.
Editorial content designed to be cited
Organize thematic hubs that bring together the parent page and the most searched “question-answer” nodes. Each node should contain a short definition, an extended explanation, a section of examples, and a paragraph on conditions and limitations. Make sources and criteria visible, establish an update schedule for time-sensitive content, apply a round of fact-checking when regulations, prices, or evidence change, and ensure internal consistency to avoid cannibalization.
This way, users can immediately find what they need, and the systems that synthesize the information have clear blocks to cite.
And don’t overlook communities and sources that are “winners” for LLMs: Wikipedia and Reddit are growing because they respond with structure, verifiability, and social control, and you can leverage their signals.
On Wikipedia, work first on the encyclopedic nature of the topic and third-party sources; avoid promotional tones and clearly signal conflicts of interest when present. On Reddit, choose the right subreddits, respect policies and format, contribute with real guides, useful tables, reproducible examples, and declare your affiliation when necessary. The goal is to offer useful answers even outside your site. When you publish datasets, glossaries, or decision trees, make them easy to cite (license, version, date, source). AI favors what shortens the work of those who search and those who respond.
Keep an eye on Bing: a targeted approach
If your audience is often in front of a desktop, Bing is a resource you need to treat methodically. Start working on the basics that matter most here: IndexNow to speed up the discovery of updates, Bing Webmaster Tools to submit and check sitemaps/URLs, diagnose crawl issues, and read the queries that really drive traffic; take care of your brand SERP (well-structured official website, verifiable profiles, orderly reviews) and use clean structured data on products, how-to guides, and articles, because Bing tends to value them when building information boxes.
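IndexNow itself is a simple HTTP API: you submit changed URLs together with your site key. A minimal sketch using Python's standard library (the host, key, and URLs below are placeholders; the endpoint and JSON fields follow the public IndexNow protocol):

```python
import json
import urllib.request

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def build_payload(host, key, urls):
    """Assemble the IndexNow JSON body for a bulk URL submission."""
    return {
        "host": host,
        "key": key,
        # The key file must be reachable at this location for verification.
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }

def submit(host, key, urls):
    """POST the changed URLs; a 200/202 response means they were accepted."""
    data = json.dumps(build_payload(host, key, urls)).encode("utf-8")
    req = urllib.request.Request(
        INDEXNOW_ENDPOINT,
        data=data,
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (placeholders; do not run against a real key you don't own):
# submit("example.com", "your-indexnow-key", ["https://example.com/updated-page"])
```

Hooking a call like this into your publish or update workflow is what makes the "speed up discovery of updates" point concrete.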
Combine the organic part with SEA, and therefore Microsoft Advertising when you have business audiences or “desktop” categories (software, professional services, B2B): import campaigns from Google judiciously, separate brand and non-brand, measure CPA/LTV, and keep an eye on desktop coverage. If the cost differential is favorable and quality remains high, you have a channel that supports demand while Google reorganizes.
Finally, remember the role of the system: monitoring Bing increases the likelihood of being picked up by agents who rely on its index. Keep entities clear and content “answer-first” here too: these are the same signals that help you stand out when the answer is composed outside the classic SERP.
SEOZoom as a radar: what to read, with precision
The new SEOZoom is the tool that tells you where you are chosen and how much that choice is worth.
- Zoom Authority is the indicator of the domain’s overall authority: use it to calibrate your ambition in the most contested topics and to decide where it really pays to invest.
- Keyword Analysis and SERP Analysis help you map the intents and the actual form of the results page on your topics, so you can design hubs and subpages without duplication.
- Backlink Analysis shows you which pages are gaining or losing links: these are candidates for a targeted refresh or to become more quotable “answer-first” pages.
- AI Rank gives you an estimate of visibility/traffic when your URL is selected in a Google AI Overview. It is the most direct measure of “above-the-fold” work.
What’s more, you can analyze the complexity of the SERP to understand where you compete, viewing the presence of AI Overviews, modules, and other elements for each of your keywords. Use the Editorial Assistant and AI Writer to create semantically rich content that is already designed to be “future-proof” and to dialogue with algorithms. Complete the picture with ADS Insight to analyze your competitors’ paid campaigns and gain a truly comprehensive view of their strategies.
With this architecture, you can maintain relevance, recognizability, continuity between SERP and page, and objective reading of signals. This is how you get chosen at the points where preference is born today.
Governance, testing, and the coming months: how to maintain your advantage
Now you need to move from observer to director. You understand where choices are made, so you need a method that puts measures in order, gives rhythm to experiments, and protects your work from external shocks.
It works if you start from a clean baseline, plan short sprints with clear goals, and keep an agenda that anticipates platform movements. There is only one direction: consistently read Google in hybrid mode, referrals from conversational engines, and Bing’s desktop corridor, knowing that demand is distributed and cannot be compressed into a single number.
- Short term: set the baseline and remove friction
In the first 2–4 weeks, establish quick priorities. Segment the basket into queries with and without AI Overview and read the yield in two bands (1–3 and 6–10). In Analytics, separate desktop and mobile, observe scroll depth, time to first action, and essential events on landing pages. Open a watchlist of topics where you intercept mentions in conversational engines and record weekly levels to avoid conclusions about isolated peaks. On Bing, build the starting point: share of sessions and conversions in the mix, differential on non-brand searches compared to Google, split by device. Lay the groundwork for content: clear response at the beginning, reusable blocks (definitions, tables), consistency of entities between the site and official profiles, updated feeds and structured data for the categories that appear in widgets.
- Medium term: controlled sprints and initial scaling
In the medium term (6–10 weeks), consolidate and scale what works and move from quality control to disciplined experimentation, working on three fronts. First, information hubs: make the main nodes “answer-first” with verifiable definitions, concrete examples, and sections covering the most frequently asked sub-questions; measure performance by source and consistency between SERP/chat promises and hero content.
Second, e-commerce/retail: transform categories into selection guides, highlight criteria and differences, align price and availability with what the user has just seen; observe what changes in CVR and post-click latency.
Third, Bing/Microsoft: timely SEO monitoring (IndexNow, Bing Webmaster Tools, clean structured data) and a split test on Microsoft Advertising in the more “desk-based” areas of your business; separate brand and non-brand, read CPA and LTV before scaling. Each test has an owner, an observation window, and a stop-loss decided at the outset: you are interested in the direction, not the lucky episode.
- Q4 and the beginning of the year: scale strong signals, prepare for platform changes
Enter the peak season with clear priorities and a structure that can handle peaks. Increase coverage where you see consistent quality per source, not chasing micro-shifts in ranking. Keep a close eye on topics exposed to AI Overview and themes where conversational engine citations show volatility; only move your budget when the trend is stable over four weeks. Watch the expansion of AI Mode in English and the agentic functions that reduce transitions between search and action; continue to read Google and conversational engines separately by device, because desktop bias remains decisive in AI referrals. On Bing, consolidate where the differential on non-brand is favorable and the audience is clearly desktop-based. And monitor mobile closely: BrightEdge data shows that most mobile traffic to Google comes from iPhone—any moves by Apple/Safari on defaults could redistribute queries and referrals in a short time.
Risks and compliance: simple rules that save your work
Growth is sustainable if you respect rules and contexts, not least because every mistake is amplified and risks penalizing you.
The first critical point is behavior in communities. Reddit, vertical forums, and specialized groups are places where users seek authentic answers. If you show up with “convenient” accounts that promote your brand by pretending to be ordinary users, you are engaging in astroturfing—a practice of “fake word of mouth” that simulates grassroots consensus. It is quickly exposed by moderators and, today, also by LLMs that cross-reference reliability signals. The effect is not only reputational: citations disappear and subsequent conversations exclude you. A transparent, signed presence with useful and verifiable contributions works better, even when they do not bring immediate clicks.
The second risk concerns the reliability of information. Prices, procedures, regulatory and comparative requirements are “expiry” content. If they change and your page lags behind, that error doesn’t stop there: it is summarized in a widget, picked up by an AI response, cited by others. An easy way to protect yourself is to treat these assets as “live”: last update date on the page, linked primary sources, and an internal version log to know what has been changed and when. If you correct a value, remember that alignment must occur everywhere: text, structured data, product feeds. Users will compare what they saw in SERP with what they find on your site; repeated inconsistencies lead to a loss of trust and close the door to citations.
Then there is the issue of disclosures. If a page includes affiliations or sponsorships, the wording must be explicit and legible; emphasis is not needed, clarity is. The same applies to incentivized reviews: it is better to limit them and be clear, because it is in “answer-first” contexts that readers and systems look for signs of impartiality. Finally, experiments: the problem is not getting a test wrong, it is drawing premature conclusions. Too short an observation window, too many variables changed together, or seasonality not taken into account generate decisions based on noise. If a variation has not reached critical mass, it has neither “worked” nor “failed”: you simply do not have enough evidence.
Tools and ownership: how to organize your work so that data matters
A system works when everyone knows what to look at and why. Imagine the beginning of the month. You and your team start from a shared snapshot: queries with and without AI Overview, yield by bands (1–3 and 6–10), on-page behavior for the landing pages that matter—and SEOZoom acts as a radar on all fronts.
From here, roles intertwine without overlapping. The strategy defines the hypothesis: which intent is worth investing in and to what depth. The editorial team builds or updates thematic hubs, avoiding internal duplication and making information blocks that people and systems readily cite (clear definitions, clean tables, procedural steps) reusable. The data team ensures that readings are comparable over time: it constantly separates desktop and mobile, notes context and anomalies, and keeps a 28-day window for referrals from conversational engines in order to distinguish fluctuations from trends. E-commerce (or whoever governs the catalog) aligns feeds and pages: prices, availability, variants, and reviews must match what the user has seen in SERPs. Paid advertising oversees Microsoft Advertising, where the audience is desktop-based, and reports not only CPA and volumes, but also differences in behavior between branded and non-branded, in order to inform more informed editorial choices.
This chain works when the handover is explicit. A typical example: analysis shows that on an information cluster, Overviews are frequent, high CTRs decline, but positions 6–10 hold up, and AI Rank indicates recurring selections “above the click.” The strategy formulates the hypothesis (“the user scrolls looking for different angles and evidence”), the editorial team adds comparative blocks and verifiable examples to the parent page and nodes, e-commerce verifies that related product sheets reflect the comparison criteria visible in SERP, paid tests non-brand queries close to that theme on Bing, and the data measures the effect over four weeks with the same lenses. It’s not a checklist: it’s a relay race in which each role knows what to look for and how to turn it into a piece of choice in your favor.