Your Prospects Are Researching You With AI Right Now


What Are They Finding?


Here's something nobody in digital marketing wants to say out loud: the question is no longer whether AI has changed how your customers find you. That ship has sailed. The question is whether your business shows up in those AI conversations at all, and if it does, whether it shows up accurately.


Right now, someone is asking ChatGPT, Perplexity, or Google's AI about a problem your business solves. AI is giving them an answer. The answer either includes you or it doesn't. It either represents you accurately or it doesn't. And unlike a search result, you can't just watch the impression data roll in and call it awareness.


This is a different problem than most businesses realize. And it requires a different response than most marketers are giving it.


The Scale of What's Changed


Start with the numbers, because they're hard to dismiss:


  • ChatGPT now has over 900 million weekly active users globally.

  • Google's AI Overviews, launched just over a year ago, have already reached 2 billion monthly users across more than 200 countries — that's straight from Alphabet's CEO.

  • Perplexity, a platform that barely registered on most people's radar two years ago, processed 780 million queries in a single month in mid-2025.


These are not niche tools anymore. They're where a significant and growing portion of your prospects go to research problems, evaluate vendors, and form opinions before they ever visit a website or make a call.


The search landscape your marketing strategy was built around no longer reflects how people actually search.


The Part Your Analytics Aren't Showing You


Pull up your Google Search Console right now and look at impressions versus clicks over the last 18 months. For most mid-market businesses, impressions are holding steady or even growing. Clicks tell a different story.


That gap is not a coincidence. It's AI.


  • A Seer Interactive study analyzing 3,119 informational queries across 42 organizations found that organic click-through rates dropped 61% when Google AI Overviews were present — falling from 1.76% to 0.61%.

  • Paid search fared even worse, with CTR dropping 68% on the same queries.

  • A separate Pew Research study found that when an AI summary appears, overall click-through rates drop from 15% to 8%.

  • In the same study, 26% of users ended their search session entirely after seeing the AI answer.

  • Meanwhile, research from Bain found that 60% of searches now end without a click at all.


Here's what makes this different from previous shifts in search behavior: this isn't just about losing blog traffic. It's about losing the research phase of your sales cycle. The person who used to click through to three or four different sites, compare options, read your about page, and then decide who to contact is now getting a synthesized answer in 30 seconds. Whether your business is mentioned in that answer, and how it's framed, directly affects whether you're on their shortlist.


That's a revenue conversation, not a traffic conversation.


There is one meaningful bright spot in the data: brands that are cited in AI Overviews see 35% more organic clicks and 91% more paid clicks compared to brands on the same page that aren't cited. Being in the AI answer isn't just good for visibility. It actively drives more traffic than traditional ranking alone.


How AI Decides What to Say About Your Industry


AI models don't operate like search engines. They're not ranking your page based on a set of signals and surfacing it in a list. They're generating a response using patterns learned from enormous volumes of content, then attributing pieces of that response to sources they deem credible.


To simplify it: AI is looking for content that is authoritative, clearly structured, specific, and corroborated by other credible sources. Research shows brands are 6.5 times more likely to be cited through third-party sources than through their own domains. That finding carries significant strategic implications, and most businesses haven't acted on them yet.


What AI struggles with is generic content — the kind that covers a topic broadly without offering anything a reader couldn't find on a dozen other sites. If your blog has spent years producing articles designed primarily to rank for keywords, without genuine depth or specificity, AI has limited reason to treat your business as an authoritative voice on anything.


This is where a lot of businesses are getting hurt right now without understanding why. Their traditional SEO strategy produced content that performed reasonably well in a keyword-ranking environment. That same content is nearly invisible in an AI-answer environment, because it doesn't offer anything distinctive for AI to cite.


The businesses that AI cites regularly share a few specific characteristics.


They publish content that contains information AI can't easily find elsewhere. That means original data, specific client outcomes, documented methodologies, and firsthand expertise on niche problems. AI is very good at summarizing common knowledge. It defaults to sources that offer something beyond that when the query demands more specificity.


They appear consistently across credible external sources. Industry publications, trade associations, earned media placements, expert roundups. AI models weight third-party corroboration heavily because it signals that a source is recognized as authoritative beyond its own website.


Their content is structured so AI can extract clear answers from it. This isn't about writing for robots. It's about writing with enough discipline and clarity that a machine can identify what question you're answering and what your position is. FAQ sections, clear subheadings, direct declarative statements. The same things that make content easy for a busy executive to skim make it easy for AI to parse.


The Part About Google That Most Guides Won't Tell You


Much of the advice circulating right now about AI search optimization deserves some healthy skepticism.


A lot of the guidance is tactical in a way that could backfire.


  • Chunk your content for LLMs.

  • Optimize every page for AI extraction.

  • Structure everything around conversational queries.


Google's own team has pushed back on this, and they're right to. Systems improve. Tactics that game the current version of AI search may work against you when the next update arrives. Google has consistently penalized content optimized primarily for machines rather than humans, and there's no reason to believe that pattern will change with AI.


The better framework is simpler:


  • build genuine authority in your space

  • produce content that reflects real expertise

  • get recognized by credible external sources, and

  • make sure your website gives AI the structural signals it needs to understand what you do and why you're credible.


That's not a new idea dressed in new language. It's the same principle that has driven durable search visibility for 20 years, applied to a new set of tools.


What has changed is the urgency. The businesses that establish authority in AI-generated answers now will be significantly harder to displace than the ones who wait until this is mainstream knowledge. And it's moving faster than most people expect — AI Overviews grew from appearing on 6.5% of searches in January 2025 to nearly 25% by July 2025 before pulling back to around 16% by year end. That kind of volatility means the window for being an early mover is shorter than it looks.


What "Being Visible in AI Search" Actually Requires


Let's be concrete about the work involved, because vague advice doesn't help anyone decide where to spend time and budget.


Your content needs to be worth citing. 


This means an honest assessment of what you currently publish. If it's thin, generic, or primarily keyword-driven, it needs to be rebuilt with genuine depth. Not more content. Better content. The businesses winning in AI search are not publishing more than their competitors. They're publishing content with more substance per piece.


Your brand needs to exist in the places AI looks for corroboration. 


That means PR, industry media, association involvement, and expert positioning in publications your prospects already respect. A strong website alone is not enough when AI is trying to determine whether your business is actually recognized as credible by sources other than yourself. The 6.5x citation advantage through third-party sources is not a minor factor — it's the single biggest lever most businesses haven't pulled.


Your technical infrastructure needs to be sound. 


This is where most businesses think they've done enough when they've really only started. Data from SE Ranking shows that pages loading in under 0.4 seconds average 6.7 ChatGPT citations, while slower pages drop to 2.1. Site performance is now a citation factor. But the deeper technical conversation — the one that most businesses aren't having yet — is about schema markup.


You need a way to measure what AI is actually saying about you. 


This is the piece most businesses are skipping, partly because it's less established and partly because standard analytics don't capture it. But if you're not regularly testing how AI tools respond to queries in your category, you're operating blind. You don't know if you're being cited, how you're being described, or whether your competitors are being positioned favorably relative to you.


As of mid-2025, AI Mode clicks count toward Search Console totals, but you still can't filter them separately — which means manual testing across ChatGPT, Perplexity, and Google's AI is the only way to see the full picture.
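

If your team wants to make those checks repeatable, a small script can run the same category questions on a schedule. Below is a minimal sketch using the OpenAI Node SDK; the brand name and queries are placeholders, and because the API doesn't browse the web the way the consumer ChatGPT product can, treat the output as a rough proxy rather than a perfect mirror of what prospects see.

    // Sketch only: spot-check how an AI model answers category queries and
    // whether it mentions your brand. Uses the official OpenAI Node SDK;
    // the brand name and query list are placeholders.
    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    const brand = "Example Consulting Group"; // placeholder brand
    const queries = [
      "Who are the top operations consultants for mid-market manufacturers?",
      "How should a manufacturer choose a partner for ERP selection?",
    ];

    async function checkMentions(): Promise<void> {
      for (const query of queries) {
        const response = await client.chat.completions.create({
          model: "gpt-4o",
          messages: [{ role: "user", content: query }],
        });
        const answer = response.choices[0].message.content ?? "";
        const mentioned = answer.toLowerCase().includes(brand.toLowerCase());
        console.log(`${query} -> mentions ${brand}: ${mentioned}`);
      }
    }

    checkMentions().catch(console.error);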


Schema Markup: It's Not What It Was, and That's the Point


Most people still think of schema markup the way they thought about it five years ago. A line of code you add to a product page to get star ratings in search results. A box to check during an SEO audit. Something the technical team handles and then forgets about.


That version of schema is gone. What has replaced it is considerably more strategic, and the businesses that haven't caught up are leaving a meaningful competitive gap wide open.


The original job was cosmetic. The new job is foundational.


Schema markup was created in 2011 as a shared vocabulary developed by Google, Microsoft, Yahoo, and Yandex to help search engines understand web content more efficiently. For years, its primary value was visual: rich results, star ratings, event listings, recipe cards. Useful, but optional. Nice-to-have.


In March 2025, both Google and Microsoft publicly confirmed they use schema markup for their generative AI features. Google was explicit: structured data is critical for modern search features because it is efficient, precise, and easy for machines to process. Then in May, ChatGPT confirmed it uses structured data to determine which products appear in its results.


That confirmation changed the conversation entirely. Schema stopped being an SEO tactic and became a requirement for making your organization intelligible to AI.


AI doesn't read your website the way a person does.


Search engines and AI systems don't experience websites the way humans do. They don't scroll, skim, or read for context the way a buyer does. They rely on structure to understand what information means and how it connects. Schema markup provides that structure by turning content into explicit data about entities and their relationships.
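

To make that concrete, here's a minimal sketch of what explicit entity data looks like: an Organization object in the Schema.org vocabulary, written as a TypeScript constant and serialized into the JSON-LD script tag that crawlers and AI systems read. Every detail below is a placeholder.

    // Illustrative Organization markup; every detail is a placeholder.
    // The object uses the Schema.org vocabulary, and JSON.stringify turns it
    // into the JSON-LD block that search engines and AI systems read.
    const organizationSchema = {
      "@context": "https://schema.org",
      "@type": "Organization",
      "@id": "https://www.example.com/#organization",
      name: "Example Consulting Group",
      url: "https://www.example.com/",
      logo: "https://www.example.com/logo.png",
      description: "Operations consulting for mid-market manufacturers.",
      sameAs: ["https://www.linkedin.com/company/example-consulting-group"],
    };

    const jsonLd = `<script type="application/ld+json">${JSON.stringify(
      organizationSchema,
    )}</script>`; // embedded in the page's <head>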


Without that structure, AI has to infer. And inference leads to errors, omissions, and in some cases, complete misrepresentation of what your business does, who it serves, and what makes it credible. A benchmark study by Data World found that LLMs grounded in knowledge graphs achieve 300% higher accuracy compared to those relying solely on unstructured data. That is not a marginal improvement. That is the difference between an AI that describes your business accurately and one that doesn't mention you at all — or worse, gets it wrong.


The vocabulary itself keeps expanding.


Schema.org — the shared standard that defines what types of structured data exist and how to implement them — is not static. It grows as the ways people search and the ways AI processes information evolve.


In November 2025, Google announced it would deprecate support for seven structured data types starting January 2026 — including COVID-specific announcement markup, Dataset for general search, and Q&A schema. Some in the SEO community read this as Google pulling back from structured data. The reality is the opposite. Google is pruning the types that aren't doing meaningful work and concentrating support around the ones that matter most for AI understanding — Organization, Article, Product, LocalBusiness, FAQPage, and the entity relationship types that help AI understand who you are, what you know, and what you offer.
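

As one illustration of those still-supported types, here's what FAQPage markup can look like, again sketched as a TypeScript object ready to be serialized into JSON-LD. The question and answer are placeholders.

    // Illustrative FAQPage markup; the question and answer are placeholders.
    // Each Question/Answer pair maps a piece of content to a structure AI
    // systems can extract directly.
    const faqSchema = {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      mainEntity: [
        {
          "@type": "Question",
          name: "How long does a typical ERP selection project take?",
          acceptedAnswer: {
            "@type": "Answer",
            text: "For most mid-market manufacturers, plan on 3 to 6 months from requirements gathering to a signed vendor contract.",
          },
        },
      ],
    };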


At the same time, new schema types are emerging to meet new AI behaviors. Markup to disclose AI-generated content is being developed as a credibility signal. As AI assistants begin handling complete transactions rather than just answering questions, new schema types for conversational commerce are in development. The vocabulary grows wherever AI capabilities expand, and if your implementation doesn't keep pace, you fall behind in the areas where AI is growing fastest.


The bigger shift: from rich results to knowledge graphs.


The most significant evolution in how schema is being used isn't about individual page types. It's about the relationship between pages — what's called a Content Knowledge Graph.


When schema markup data is connected across an entire site, it forms a machine-readable layer of how an organization's content, brand, and offerings relate to one another. The connections bring context, and the context brings understanding. AI doesn't just need to know what a single page says. It needs to understand what your business is, what expertise it holds, how your services relate to each other, and why you're a credible source on the topics you cover.


A site with disconnected schema on individual pages tells AI fragments. A site with a well-built knowledge graph tells AI a coherent story. The difference in how that site gets represented in AI-generated answers is significant.
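

Here's a rough sketch of what that connection looks like in practice. The @id references are the connective tissue: the article points to its author and publisher, and the author points back to the organization. All names and URLs are placeholders.

    // Illustrative sketch of connected schema; all names and URLs are placeholders.
    // The @id references link the entities so AI can follow the relationships
    // instead of seeing three isolated fragments.
    const organization = {
      "@type": "Organization",
      "@id": "https://www.example.com/#organization",
      name: "Example Consulting Group",
      url: "https://www.example.com/",
    };

    const author = {
      "@type": "Person",
      "@id": "https://www.example.com/team/jane-doe#person",
      name: "Jane Doe",
      jobTitle: "Director of Operations Strategy",
      worksFor: { "@id": "https://www.example.com/#organization" },
    };

    const article = {
      "@type": "Article",
      "@id": "https://www.example.com/insights/reshoring-costs#article",
      headline: "What Reshoring Actually Costs a Mid-Market Manufacturer",
      author: { "@id": "https://www.example.com/team/jane-doe#person" },
      publisher: { "@id": "https://www.example.com/#organization" },
    };

    // All three entities ship together in one JSON-LD @graph on the article page.
    const articlePageSchema = {
      "@context": "https://schema.org",
      "@graph": [organization, author, article],
    };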


A new protocol is emerging that takes this further.


Microsoft announced NLWeb at Build 2025 — an open-source project designed to turn any website into an AI-queryable application. The person who built NLWeb is R.V. Guha, the same engineer who created RSS, RDF, and Schema.org itself. That's not a coincidence.

NLWeb is purpose-built for websites that already use Schema.org markup, making that structured data immediately usable by AI systems. It connects your schema to large language models through the Model Context Protocol (MCP), allowing AI agents to query your website content in natural language rather than simply crawling it page by page.


To put it plainly: with NLWeb, an AI agent doesn't just read your website. It asks your website questions and gets structured answers back. TripAdvisor, Shopify, and Eventbrite have already deployed it.
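

To make the idea concrete, here's an illustrative sketch of that interaction. The /ask endpoint and the response shape below are assumptions for illustration, not the documented NLWeb interface; the point is that an agent sends a plain-language question and gets Schema.org-structured data back instead of raw HTML.

    // Hypothetical sketch only: the /ask endpoint and response fields are
    // illustrative assumptions, not the documented NLWeb interface.
    interface StructuredResult {
      "@type": string; // a Schema.org type, e.g. "Service"
      name: string;
      description: string;
      url: string;
    }

    async function askSite(question: string): Promise<StructuredResult[]> {
      const res = await fetch("https://www.example.com/ask", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query: question }),
      });
      const data = await res.json();
      return data.results as StructuredResult[];
    }

    // An agent asking the site a question instead of crawling it page by page.
    askSite("Which of your services help with supply chain planning?").then(
      (results) => results.forEach((r) => console.log(r.name, r.url)),
    );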


The implication for mid-market businesses is that schema markup is no longer just about being understood by search engines today. It's about being queryable by AI agents tomorrow. Robust, entity-first schema implementation is no longer just a way to win a rich result. It is the foundational requirement for the next phase of AI-driven search.


There is also a crawling issue most businesses don't know about.


Here is something the standard "optimize for AI search" advice skips over entirely. Crawler data shows a clear divide: while Googlebot can fully render JavaScript, many AI crawlers — including GPTBot, ClaudeBot, and PerplexityBot — currently cannot.


These crawlers can only reliably interpret content and structured data included in the initial HTML response.


That means if your schema markup is rendered client-side through JavaScript — which is how many CMS platforms implement it by default — a significant portion of AI crawlers may never see it. You could have technically correct schema that AI is simply not reading. Fixing this requires either server-side rendering of structured data or edge-based delivery. It's a technical decision, and it has real consequences for AI visibility.
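

Here's a minimal sketch of the server-side approach, assuming an Express-based site; the route and schema object are placeholders. The JSON-LD is part of the HTML string the server sends, so crawlers that never execute JavaScript still receive it.

    // Minimal sketch, assuming an Express-based site: the JSON-LD is rendered
    // into the initial HTML response, so crawlers that don't run JavaScript
    // (GPTBot, ClaudeBot, PerplexityBot) still see it. The schema is a placeholder.
    import express from "express";

    const app = express();

    const localBusinessSchema = {
      "@context": "https://schema.org",
      "@type": "LocalBusiness",
      name: "Example Consulting Group",
      url: "https://www.example.com/",
      telephone: "+1-555-0100",
    };

    app.get("/", (_req, res) => {
      res.send(`<!doctype html>
    <html>
      <head>
        <title>Example Consulting Group</title>
        <script type="application/ld+json">${JSON.stringify(localBusinessSchema)}</script>
      </head>
      <body><h1>Example Consulting Group</h1></body>
    </html>`);
    });

    app.listen(3000);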


What this means practically.


Schema is no longer a one-time implementation. It's an ongoing discipline that requires attention to three things simultaneously: keeping pace with what Google and Microsoft are actively supporting, building entity relationships across your site rather than treating pages as isolated objects, and ensuring your implementation is accessible to the AI crawlers that matter.
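

One quick way to audit that third item is to fetch a page the way a non-rendering crawler would, raw HTML with no JavaScript execution, and check whether the JSON-LD is actually there. A rough sketch, with a placeholder URL:

    // Rough check: fetch raw HTML the way a non-rendering crawler would
    // (no JavaScript execution) and confirm JSON-LD is present in that response.
    async function hasServerRenderedSchema(url: string): Promise<boolean> {
      const res = await fetch(url);
      const html = await res.text();
      return html.includes("application/ld+json");
    }

    hasServerRenderedSchema("https://www.example.com/").then((found) =>
      console.log(found ? "JSON-LD found in initial HTML" : "No JSON-LD in initial HTML"),
    );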


Most businesses are operating with schema implementations that were set up years ago, validated once, and left alone. In the current environment, that's the equivalent of having a website that hasn't been touched since 2018 and wondering why it doesn't perform.


The businesses investing in this now are building infrastructure that compounds. Every new content piece connects to an existing entity web. Every AI model that crawls the site gets a more complete picture. Every query about your industry becomes slightly more likely to surface your business as a credible source. That's not a quick fix. It's a strategic asset — and it gets harder to replicate the longer a competitor has been building it.


What Happens If You Wait


This is the question worth sitting with.


The businesses that show up consistently in AI-generated answers are building a form of authority that compounds over time and becomes increasingly difficult for later movers to displace. The research is consistent on this: 92% of AI Overview citations come from domains already ranking in the top 10. Authority in traditional search and authority in AI search are not separate games. They reinforce each other. But the businesses building that authority now are pulling ahead while others are still debating whether this is real.


The analogy to early SEO is imperfect but instructive. The businesses that invested seriously in organic search in 2007 and 2008, before it was a mainstream priority, built authority that sustained them through multiple algorithm updates and competitive surges. The ones who waited until it was obvious ended up paying significantly more for the same results, if they ever caught up at all.


AI search is earlier in that curve than most people assume. The strategy gap between businesses actively building AI visibility and those ignoring it is widening every quarter.


Three Ways We Help Businesses Get This Right


We built WSI AI Campus because the training and strategy gap in this area is real, and most businesses can't close it internally with the information currently available.

There's no single right way to engage with this, so we built three paths.


Done for you. 


If your team's bandwidth is the constraint, or if you'd rather have experts handle AI search strategy while you focus on running your business, we take it on. Audit, strategy, content, technical implementation, ongoing measurement. You stay informed. We do the work.


We teach you to do it. 


If you want to build this capability inside your organization, AI Campus is structured training that takes your team from foundational AI literacy through advanced implementation. This isn't a seminar with a workbook. It's hands-on, live, and built around real business applications. When your team finishes, they can execute independently.


We work alongside you. 


The hybrid model is what a lot of businesses choose. Your team handles execution. We provide strategy, oversight, and the expertise to course-correct when the landscape shifts. You build internal capability without starting from scratch or going it alone.


Each of these paths starts with the same thing: a clear picture of where your business currently stands in AI search, what your competitors' visibility looks like, and what the gap actually costs you in terms of revenue opportunity.


If you want to know where you stand, that's the right conversation to start with.

So book a conversation with our team.


Sources:

Backlinko / Semrush, ChatGPT Statistics, December 2025

Alphabet CEO Sundar Pichai, Q3 2025 Earnings Call, via CNBC, August 2025

Perplexity AI CEO Aravind Srinivas, Bloomberg Tech Summit, May 2025

Seer Interactive, AIO Impact on Google CTR: September 2025 Update, November 2025

Pew Research Center, Do People Click on Links in Google AI Summaries, July 2025

Bain & Company, New Front Door to the Internet, February 2025

position.digital, AI SEO Statistics, updated February 2026

Semrush, AI Overviews Study: What 2025 SEO Data Tells Us About Google's Search Shift, December 2025

SE Ranking, Page Speed and ChatGPT Citation Analysis, November 2025

Schema App, What 2025 Revealed About AI Search and the Future of Schema Markup, January 2026

Google Search Central, Structured Data Documentation, March 2025

Microsoft Bing, Fabrice Canel presentation at SMX Munich, March 2025

OpenAI, ChatGPT product schema confirmation, May 2025

Data World, LLM Knowledge Graph Accuracy Benchmark Study

Google Search Central, Structured Data Deprecation Announcement, November 2025

Microsoft Build 2025, NLWeb Announcement, May 2025

Schema App / Vercel, AI Crawler JavaScript Rendering Analysis, 2025

dataslayer.ai, AI Overview citation source analysis, 2025
