Last-Click Attribution Is a Lie You're Telling Your CFO
- Heidi Schwende


Google just released research that confirms what most mid-market companies are getting wrong about paid media measurement. The answer isn't more data. It's better causality.
There's a conversation that happens in boardrooms across North America every quarter. Marketing presents a stack of dashboard screenshots. The CFO looks at the numbers, nods, and then quietly redirects budget somewhere else. Not because the marketing isn't working. Because no one can prove that it is.
That's the measurement problem in plain language.
Google published a research collection called The Science of Demand: Ads Measurement in the AI Era earlier this year, and it's one of the more honest pieces of content I've seen come out of a platform in a long time. It doesn't just sell you on their tools. It identifies the systemic failure in how most advertisers measure media, and it backs the argument with data.
Fair disclosure upfront: this is Google's research, published by Google, featuring case studies that validate Google's tools. I've read it with that lens in mind. The data points are cited from their internal studies and named third-party reports. I'll note the source throughout so you can calibrate accordingly. Where I think their conclusions hold up, I'll say so. Where there's nuance worth adding, I'll add it.
The Problem Isn't Your Budget. It's Your Visibility.
According to Google's framework, the core issue for most advertisers isn't how much they're spending. It's that they can't see which dollars are doing real work and which aren't.
They break it down cleanly. Every marketing dollar should do one of two things: convert existing demand right now, or build new demand for a future sale. Most dashboards only reward the first job. The second job goes completely unrecorded.
This is what Gaurav Bhaya, Google's VP of buying, analytics, and measurement, calls the "dark spend" era. And the data behind it is hard to argue with.
Here's what Google's internal research found across tens of thousands of advertisers:
Standard Google Ads campaigns capture 70% of conversions within the default 30-day click and 3-day engaged-view window. (Google Internal Data, n=7,000 advertisers, July–December 2025)
Performance Max campaigns only capture 50% of their conversions in that same window. (Google Internal Data, n=5,000 advertisers, July–December 2025)
Demand Gen campaigns only capture 40%. (Google Internal Data, n=4,000 advertisers, July–December 2025)
That means if you're running a Demand Gen campaign and judging it on last-click within 30 days, you're calling it a failure when it's actually delivering. You're measuring the wrong thing and making budget decisions based on that.
The independent research makes it even starker. According to Fospha's 2025 Demand Gen and YouTube Playbook, relying on last-click attribution for YouTube and Demand Gen campaigns undervalues returns by an average of 14 times. Not 14%. Fourteen times. Google cites this in their research, and Fospha published it independently, so there are two sources pointing at the same structural problem.
Last-click attribution undervalues YouTube and Demand Gen returns by an average of 14 times. Not 14%. Fourteen times. — Fospha, Demand Gen and YouTube Playbook, 2025
Why This Keeps Happening
Short-term metrics feel like certainty. Last click is clean. It's comfortable. You can point to it in a slide deck.
But clean and comfortable is not the same as accurate.
In long purchase-cycle categories, which describes most mid-market B2B and higher-consideration B2C products, demand gets built over time. Someone sees your YouTube ad in January, searches your brand name in February, reads a blog in March, and converts in April. Under a 30-day last-click model, January and February don't exist. March takes all the credit. January's budget gets cut.
You've just killed the engine that generated the conversion.
If you only reward the final click, you end up killing the engine that generated the desire in the first place.
This isn't a new problem, but AI is accelerating it. Google's framework calls it the "visibility gap." You're not underspending. You're misallocating, and you don't know it because the tools most advertisers use aren't built to show the full picture.
And there's a layer Google's research doesn't cover at all: what happens before someone searches. As AI-generated answers increasingly shape discovery, your brand's presence in those responses is itself a form of demand creation that no paid attribution model currently measures. It's what we track through our AI Resonance Model™ — the discipline of building organic AI citation as a measurable business asset, separate from and upstream of the paid media funnel.
The Fix: A Three-Part Measurement Stack
Google's framework proposes what they call a "measurement trifecta." I want to walk through this practically, because it matters.
Attribution
Attribution tells you what happened at the channel level. It's your operational signal. It's imperfect, but it's fast. You use it for day-to-day optimization. The problem is that most companies stop here and treat it like the whole truth. It isn't.
The fix is to use data-driven attribution models instead of last-click, and to stop crediting only what's visible in a 30-day window when your product has a longer purchase cycle. In Google Analytics 4, you can run path exploration and path length reports to see the actual time lag between a user's first interaction and their final conversion. Do that audit before you make your next budget call.
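If you'd rather run that audit on an export than eyeball the GA4 report, the check is a few lines. A minimal sketch, assuming an illustrative schema — the column names (`user`, `ts`, `converted`) are made up, not GA4's actual export fields — that computes days from first touch to conversion and the share of conversions a 30-day window would miss:

```python
from datetime import datetime

# Hypothetical touchpoint export: one row per interaction, with a flag
# on the converting event. Schema is illustrative, not GA4's.
touchpoints = [
    {"user": "u1", "ts": "2025-01-05", "converted": False},
    {"user": "u1", "ts": "2025-02-10", "converted": False},
    {"user": "u1", "ts": "2025-04-01", "converted": True},
    {"user": "u2", "ts": "2025-03-01", "converted": False},
    {"user": "u2", "ts": "2025-03-15", "converted": True},
]

def conversion_lags(rows):
    """Days from each user's first touch to their converting touch."""
    first, conv = {}, {}
    for r in rows:
        ts = datetime.fromisoformat(r["ts"])
        u = r["user"]
        if u not in first or ts < first[u]:
            first[u] = ts
        if r["converted"]:
            conv[u] = ts
    return {u: (c - first[u]).days for u, c in conv.items()}

lags = conversion_lags(touchpoints)
over_30 = sum(1 for d in lags.values() if d > 30) / len(lags)
print(lags)      # {'u1': 86, 'u2': 14}
print(over_30)   # 0.5 -> half the conversions fall outside a 30-day window
```

In this toy data, one of two conversions took 86 days first-touch-to-close; a 30-day last-click window would have erased the campaign that started it.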
Incrementality Testing
Incrementality testing asks the question attribution can't: did the ad cause the conversion, or would that person have converted anyway? That distinction matters enormously, especially for branded search. You might be spending significant budget to capture clicks from people who already decided to buy.
Google lowered the minimum cost for incrementality experiments from $100,000 to $5,000 in 2025, according to The Science of Demand. That used to be enterprise-only. It's not anymore.
For mid-market companies, a practical starting point is a geo-based conversion lift test. You split your markets into a test group that sees the ads and a control group that doesn't, then compare conversion rates. It's not complicated to design. It is complicated to interpret correctly, which is why you need someone who knows how to set it up without contaminating the results.
Google cites Rocket Mortgage as a case study in this section. According to The Science of Demand, Rocket Mortgage ran incrementality testing on a demand generation campaign through Google Ads and discovered the campaign was generating 23% more value than their model had originally estimated. That finding recalibrated their entire marketing mix model, not just that one campaign. John Joba, head of marketing data science at Rocket Mortgage, is quoted describing the outcome as moving from "reactive justification to proactive planning" with finance. That's the goal. I'd note this is a Google-published case, so take the framing with appropriate context, but the mechanism it illustrates is sound.
"We moved from reactive justification to proactive planning. We have a common language with finance firmly grounded in business value first." — John Joba, Head of Marketing Data Science, Rocket Mortgage (via Google, The Science of Demand)
Marketing Mix Modeling
MMM is the strategic layer. It takes cross-channel data and finds correlations between your marketing investments and your actual business outcomes, including offline channels, seasonality, and external factors attribution can't account for.
Google's open-source MMM tool, Meridian, is worth taking seriously. It uses a Bayesian statistical framework, which means it updates its model as new data comes in and gets more accurate over time. It also integrates incrementality test results, so your experiments feed back into your planning model rather than living in a separate analysis silo.
Meridian requires at least two years of historical data to run effectively. If you don't have that data centralized and clean, you're not ready for MMM yet. That's a prerequisite, not a knock on your business.
Google cites Pandora (the jewelry brand) as an early Meridian adopter in The Science of Demand. Kristina Kaste, Pandora's media planning specialist, describes it as moving away from gut feeling toward KPI-driven decisions, with visibility into channel performance that they didn't have before. Again, this is Google's own publication, so the sourcing is self-referential.
What I can tell you independently is that Bayesian MMM is a legitimate advancement over traditional static models, and the open-source nature of Meridian means the methodology is auditable in a way that proprietary black-box models aren't. That part I'd evaluate on its own merits.
What You Actually Need to Fix First
Here's where I have to be direct, because I keep writing about this and keep seeing the same mistake. Foundations before AI. Every time.
None of this measurement infrastructure works if your foundational data signals are broken. If your tags are firing incorrectly, if you're relying on third-party data that browsers are blocking, if your CRM data and your ad platform data live in separate silos that never talk to each other, you're building precision measurement on a cracked foundation.
According to a Google and Kantar study of nearly 2,000 marketing decision-makers cited in The Science of Demand, organizations with a first-party data strategy in place to enable AI marketing tools are 1.5 times more likely to report stronger performance than competitors who don't have that strategy. The data connection comes before the AI strategy. Not after.
This is also where AI advisory work earns its keep. Knowing which tools to implement, in what order, and how to build organizational decision-making around AI outputs is not a technology problem. It's a leadership problem. The companies getting this right have someone helping them think it through before they spend.
Practically, this means:
Audit your tag infrastructure.
Google's tag gateway for advertisers routes measurement signals through your own website infrastructure rather than third-party scripts that browsers increasingly block. According to Google's internal performance data cited in the report, advertisers who configured it saw an 11% uplift in recovered signals. That's conversion data that was invisible before. If you're optimizing campaigns on incomplete data, your AI bidding is making decisions based on a partial picture.
Centralize your data sources.
Your CRM data, web behavior data, offline transaction data, and, if applicable, app data need to live in one place and flow cleanly into your advertising platforms. The specific tool matters less than actually doing it.
Stop mixing conversion types without deduplication.
If you're counting the same conversion across multiple attribution models without reconciling them, you're inflating your reported ROAS and making budget decisions based on numbers that don't reflect reality.
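What deduplication means in practice can be sketched in a few lines, with hypothetical order records: keep one record per order ID across sources before summing revenue.

```python
# Hypothetical conversion logs from two systems reporting the same orders.
ad_platform = [{"order_id": "A1", "value": 120.0}, {"order_id": "A2", "value": 80.0}]
crm = [{"order_id": "A2", "value": 80.0}, {"order_id": "A3", "value": 200.0}]

def dedupe(*sources):
    """Keep the first record seen for each order_id across all sources."""
    seen = {}
    for source in sources:
        for row in source:
            seen.setdefault(row["order_id"], row)
    return list(seen.values())

unique = dedupe(ad_platform, crm)
inflated = sum(r["value"] for r in ad_platform + crm)  # 480.0 -> A2 counted twice
actual = sum(r["value"] for r in unique)               # 400.0
print(f"naive sum {inflated}, deduplicated {actual}")
```

In this toy case the naive sum overstates revenue by 20%; on real data the inflation compounds across every model and platform you mix.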
The CFO Conversation That Changes Everything
Google's framework calls the goal "growth governance," managing marketing like an investment portfolio, with bets sized by intent, guided by evidence, and accountable to ROI. Getting there requires something most mid-market teams are still building: executive-level fluency in how AI measurement tools actually work, what they can prove, and where they fall short. Building that literacy is a core focus of our WSI AI CAMPUS program, which gives marketing leaders and their teams the working knowledge to govern AI-driven marketing with confidence rather than deference.
Elissa Lee, Google's senior director of measurement and optimization for global media, outlines nine steps their own marketing team took to align with finance in The Science of Demand. A few are worth pulling out directly.
Stop throwing reports over the fence. Marketing works in isolation for 90 days, produces a report, and hands it to finance as a done deal. That model breeds distrust. The better approach is co-authoring growth plans with finance from the start of the quarter.
Report incremental ROAS, not blended ROAS, even when the incremental number looks smaller. Blended ROAS looks impressive. Incremental ROAS tells the truth. Finance respects accuracy. Building that credibility is what gets you a seat at the capital allocation table, not a bigger deck.
Run pilots before asking for full budget commitment. Agree on a focused test, define success criteria upfront, and let the data make the case for scaling. It's slower. It's also how you get to an annual plan where finance is co-investing in growth rather than policing your spend.
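The blended-versus-incremental distinction above is plain arithmetic. A toy illustration, with all figures invented:

```python
# A channel attributes $500k of revenue to $100k of spend, but a lift test
# finds only 40% of those conversions were actually caused by the ads.
spend = 100_000
attributed_revenue = 500_000
incremental_share = 0.40  # from an incrementality test, not attribution

blended_roas = attributed_revenue / spend                          # 5.0
incremental_roas = attributed_revenue * incremental_share / spend  # 2.0
print(f"blended {blended_roas:.1f}x vs incremental {incremental_roas:.1f}x")
```

A 5x blended ROAS makes a better slide; the 2x incremental number is the one a finance team can size a budget against.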
The agency section of Google's report includes a stat I want to flag because it's relevant to how you evaluate your partners: agencies are 35% more advanced than direct advertisers across key marketing measurement use cases, according to Accelerate with Google research published in February 2026. That gap exists because good agencies run measurement infrastructure across multiple accounts and verticals simultaneously. If your agency can't explain incrementality testing to your CFO in plain language, that's worth asking about.
What to Do This Quarter
You don't have to overhaul everything at once. But there are three things worth doing now.
Audit your conversion windows.
Pull the path length report in GA4 and look at the actual time lag between first touch and conversion for your top campaigns. If you're on a 30-day window and your conversion lag is 60 days, you're systematically undercounting results and potentially killing the channels doing the most upstream work.
Run one incrementality test.
Pick your highest-spend channel. Design a geo lift test. Find out whether you're capturing genuinely incremental conversions or taking credit for people who were going to buy anyway. That insight alone is worth more than any dashboard refresh.
Fix your tag infrastructure before you scale.
If you're planning to increase paid media budget, do the data foundation work first. More spend on a broken measurement system gives you more confident wrong answers.
The measurement era we're entering rewards advertisers who get rigorous about causality, not the ones who produce the most impressive-looking reports. Your CFO doesn't need more slides. They need a system they can trust.
WSI helps mid-market companies build the data foundations, measurement frameworks, and paid media strategies that connect marketing investment to real business outcomes. We also work with marketing leaders through AI executive advisory and WSI AI CAMPUS to build the internal literacy that makes AI-driven marketing governable. If your current reporting can't survive a CFO meeting, let's talk.
Sources:
Google, "The Science of Demand: Ads Measurement in the AI Era," 2025/2026.
Fospha, "Demand Gen and YouTube Playbook," 2025.
Google/Kantar, "AI in Marketer Journey" study, 2024.



