
What's Actually New in Marketing Measurement


I've written about broken attribution before. More than once. If you've been following along, you know my position: last-click measurement was always a simplification, AI amplifies whatever data infrastructure you feed it, and most mid-market businesses are optimizing against a distorted picture of reality before a single bid is placed.


I'm not going to repeat all of that here. What I'm going to do is tell you what's actually changed recently, because Google just published a research collection that puts hard numbers on problems I've been describing in qualitative terms, and a few of those numbers stopped me mid-read.


The Framework Underneath the Problem


Before getting into the numbers, it helps to have a clear mental model of what marketing is actually supposed to do. Google's research organizes it into three jobs, and it maps cleanly onto how I think about demand through the AI Resonance Model.


  • The first job is to create demand: sparking intent before a buyer even knows what they're looking for.

  • The second is to capture demand: showing up precisely at the moment a decision is being made.

  • The third is to convert demand: turning attention into action and using those insights to understand where future value will come from.


The measurement problem is that most businesses only have reliable visibility into the third job. Conversion tracking is mature. Demand creation measurement isn't. Which means the first two jobs are chronically underfunded because the tools for proving their value haven't kept pace with the tools for proving conversion value. Everything that follows is a consequence of that gap.


The Number I Could Not Ignore


Google's internal data, pulled from thousands of advertisers in the second half of 2025, found that standard 30-day attribution windows capture only 40 percent of conversions from Demand Gen campaigns and 50 percent from Performance Max campaigns.¹


I want to be precise about what that means. The campaigns are working. The conversions are happening. They're just happening outside the window you're using to judge whether the campaign worked. So you defund it. You reallocate budget to whatever the dashboard says is performing. And you systematically starve the part of your marketing that builds future demand.
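To make the math concrete, here's a quick sketch. The 40 percent capture rate is the Demand Gen figure from Google's data above; the spend, conversion count, and order value are numbers I invented purely for illustration.

```python
# Illustration only: the 40 percent capture rate is the Demand Gen figure
# cited above; spend, conversions, and order value are invented.

spend = 50_000            # hypothetical campaign spend, dollars
true_conversions = 500    # conversions the campaign actually drove
avg_order_value = 400     # hypothetical dollars per conversion

capture_rate = 0.40       # share of conversions a 30-day window sees

measured = true_conversions * capture_rate
measured_roas = measured * avg_order_value / spend
true_roas = true_conversions * avg_order_value / spend

print(f"Dashboard ROAS: {measured_roas:.1f}x")  # 1.6x -- looks like a loser
print(f"Actual ROAS:    {true_roas:.1f}x")      # 4.0x -- actually a winner
```

Same campaign, same conversions. One version gets defunded; the other gets scaled.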


Fospha's 2025 research put a multiplier on this specific problem: relying on last-click attribution for YouTube and Demand Gen campaigns understates their returns by an average factor of 14.²


That's not a measurement nuance. That's a capital allocation catastrophe playing out quietly in budgets right now.


When you evaluate demand-generating campaigns using short-term metrics, more than half the value being created is uncounted. It's invisible. -- Harikesh Nair, Senior Director of Data Science and Engineering, Google

The Data Problem That Sits Below All of This


None of the measurement improvements in this post matter if your signal quality is degraded to begin with. I've covered data infrastructure at length before, but two numbers from Google's recent research are specific enough to be worth citing directly.


Organizations with a first-party data strategy in place to enable AI marketing tools are 1.5 times more likely to outperform competitors who lack one.³ And advertisers who configured Google's tag gateway, which routes conversion data through your own domain rather than a third-party script, saw an 11 percent uplift in measurable signals.⁴


The practical implication is that a meaningful share of the performance gap between you and a better-measured competitor isn't a campaign problem. It's a plumbing problem. Your AI is optimizing against incomplete data, and no amount of creative or budget adjustment will compensate for a broken foundation.


What's Changed for Mid-Market Businesses Specifically


I've referenced incrementality testing before as the right way to establish causal proof of marketing performance. What I didn't have reason to highlight until now is that it was effectively inaccessible to most mid-market businesses. Running a single incrementality experiment used to cost upward of $100,000. That's an enterprise tool at an enterprise price.


In 2025, Google reduced the minimum cost of an incrementality experiment to $5,000.⁵


That's a meaningful shift. Causal measurement, the kind that lets you walk into a CFO meeting and say "our campaigns caused this outcome, not just correlated with it," is no longer reserved for brands with nine-figure media budgets. If you're a $10 million to $50 million revenue business and you're not running at least one incrementality experiment per year, you no longer have cost as an excuse.


The test-and-control framework works like this: you identify a group exposed to your ads and a comparable group that wasn't, then measure the difference in outcomes. What converted because of the ad versus what would have converted anyway. That distinction is what separates real performance data from the self-congratulatory reporting that most dashboards produce by default.
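Here's that logic as a minimal sketch. The group sizes and conversion counts are made up, and a real experiment needs randomized assignment and a significance test before you act on the result, but the core arithmetic is this simple:

```python
# Minimal sketch of the test-and-control math. All numbers are invented;
# a real experiment needs randomized assignment and a significance test.

exposed_users, exposed_conversions = 100_000, 2_400
control_users, control_conversions = 100_000, 1_900

exposed_rate = exposed_conversions / exposed_users   # 2.4% converted
baseline_rate = control_conversions / control_users  # 1.9% would have anyway

incremental_rate = exposed_rate - baseline_rate
incremental_conversions = incremental_rate * exposed_users  # caused by ads

lift = incremental_rate / baseline_rate
print(f"Incremental conversions: {incremental_conversions:.0f}")  # 500
print(f"Relative lift: {lift:.1%}")                               # 26.3%
```

The 500 incremental conversions are the number worth defending in a budget meeting, not the 2,400 the dashboard claims.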


The "Breadcrumbs" Framework Worth Stealing


Google's research team has been working on something they call leading user actions, and the concept is worth understanding regardless of whether you use their specific tools.

The problem with long-cycle demand generation is that the path from ad exposure to purchase can span months. A buyer sees a video ad in October. They search the brand name in November. They visit the website twice in December. They search the brand again in January and convert. Under a 30-day window, the October campaign gets zero credit. Under last-click, that January branded search gets all of it.
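If you want to see mechanically how that failure happens, here's a toy version of that exact journey. The dates are invented; the logic is the point.

```python
from datetime import date

# The hypothetical journey from the paragraph above, with invented dates.
touches = [
    ("video_ad",       date(2025, 10, 5)),
    ("branded_search", date(2025, 11, 12)),
    ("site_visit",     date(2025, 12, 2)),
    ("site_visit",     date(2025, 12, 18)),
    ("branded_search", date(2026, 1, 20)),
]
conversion_date = date(2026, 1, 20)

# Last-click: 100 percent of the credit goes to the final touch.
last_click_winner = touches[-1][0]

# 30-day window: only touches within 30 days of conversion are eligible.
in_window = [name for name, d in touches if (conversion_date - d).days <= 30]

print(last_click_winner)  # branded_search
print(in_window)          # ['branded_search'] -- the October video ad
                          # that started the journey is invisible
```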


Leading user actions are verifiable intermediate steps that confirm a buyer is actually moving along the purchase journey as a result of ad exposure. Things like:


  • Shifting from a category search ("best accounting software") to a branded search ("Xero pricing")

  • Subscribing to or engaging with an advertiser's YouTube channel after seeing a pre-roll ad

  • Adding a product to a cart or starting a trial signup


These aren't conversions. They're observable mile markers that confirm the campaign is doing real work even when the final sale is months away. The practical application for your business is to start tracking these intermediate signals deliberately, because they're the evidence base you need to defend demand-creation investment when finance asks why a campaign should continue running.
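Here's a rough sketch of what that tracking can look like: scoring a user's leading actions from an event log. The event names and weights are my own illustrative assumptions, not a standard taxonomy.

```python
# Hypothetical event names and weights, chosen for illustration only.
LEADING_ACTIONS = {
    "branded_search": 3,      # shifted from category terms to brand terms
    "channel_subscribe": 2,   # engaged with the brand's YouTube channel
    "add_to_cart": 4,
    "trial_signup_start": 4,
}

events = [
    {"user": "u1", "event": "branded_search"},
    {"user": "u1", "event": "add_to_cart"},
    {"user": "u2", "event": "channel_subscribe"},
]

def journey_score(events, user):
    """Sum the weights of a user's leading actions -- a rough proxy for
    progress along the purchase journey before any conversion fires."""
    return sum(LEADING_ACTIONS.get(e["event"], 0)
               for e in events if e["user"] == user)

print(journey_score(events, "u1"))  # 7 -- deep in the journey, no sale yet
```

A rising aggregate score across exposed users is exactly the kind of evidence that keeps a long-cycle campaign alive through a budget review.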


Measurement compounds like interest. The earlier you build, the more resilient you'll be. Stop treating measurement like a report card. It's your steering wheel. -- Gaurav Bhaya, VP and GM of Buying, Analytics and Measurement, Google

The CFO Problem Is a Measurement Problem


I wrote earlier this year about the tension between marketing and finance, and the way broken attribution feeds that tension. Google's research frames the solution as "growth governance," which is a useful term for something I want to describe practically.


The goal is to get finance involved in designing the measurement approach, not just receiving the results. That means:


  • Agreeing on what counts as proof before the campaign runs, not after

  • Using shared tools that neither side can argue with in a post-mortem

  • Reporting incremental and marginal returns even when they look smaller than blended ROAS numbers, because accuracy builds the trust that vanity metrics eventually erode


Google's own internal marketing team describes this shift as moving from "throwing reports over the fence" to co-authoring growth plans with finance. The distinction matters. When finance helps design the measurement framework, they're invested in what it finds. When they only receive the output, they audit it.


Google's Meridian, their open-source marketing mix model, is the tool most relevant to this conversation. It runs on a Bayesian framework, which means its estimates sharpen as you feed it more data. It needs at least two years of historical data to run properly, and it can integrate incrementality experiment results directly, so your causal evidence and your modeled evidence calibrate each other. Pandora used it to move planning away from intuition and toward cross-channel allocation decisions grounded in actual data.⁶
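If you want intuition for what a model like this actually does with your spend data, here's a conceptual sketch of two transforms at the heart of most marketing mix models, Meridian included: carryover (adstock) and diminishing returns (saturation). This is not Meridian's API, and the parameter values are arbitrary; it's just the underlying idea.

```python
import numpy as np

# Conceptual sketch only -- NOT Meridian's API. Decay and half-saturation
# values are arbitrary assumptions for illustration.

def adstock(spend, decay=0.6):
    """Carryover: this week's effect includes a decayed echo of past spend."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

def hill_saturation(x, half_sat=100.0, slope=1.0):
    """Diminishing returns: each extra dollar buys less incremental effect."""
    return x**slope / (x**slope + half_sat**slope)

weekly_spend = np.array([0, 50, 120, 120, 0, 0, 0], dtype=float)
effect = hill_saturation(adstock(weekly_spend))
print(effect.round(2))  # effect persists after spend stops, then decays
```

The Bayesian layer sits on top of transforms like these, estimating the decay and saturation parameters per channel from your own history rather than assuming them.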


You don't need a data science team to use it. You need an agency partner who does.


What to Do Before You Revisit This Topic Again


If you've been reading my content for a while, none of the underlying problems here are new to you. The short-termism, the attribution gaps, the AI amplification of bad data: I've covered all of it. What's new is that the tools to fix it are now accessible to businesses at your revenue level, the data proving the cost of inaction is more specific than it's ever been, and the CFO alignment piece has a practical framework behind it rather than just a concept.


Three things worth doing in the next 90 days:


  • Audit your time-to-conversion reality. Use GA4 path exploration to find out how long your actual buyers take from first touch to conversion (there's a sketch of the calculation after this list). If you don't know this number, you can't set rational attribution windows.

  • Run one incrementality experiment. The minimum is $5,000. The cost of continuing to optimize against the wrong signal is much higher.

  • Have the attribution conversation with your finance team before your next budget cycle. Not after. Bring them into the methodology, not just the results.
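For the first item, here's one way the audit can look in code, assuming you've exported user-level first-touch and conversion timestamps (for example, from the GA4 BigQuery export). The column names here are hypothetical.

```python
import pandas as pd

# Hypothetical export of first-touch and conversion timestamps per user;
# column names are assumptions, not a fixed GA4 schema.
df = pd.DataFrame({
    "user_id": ["u1", "u2", "u3", "u4"],
    "first_touch": pd.to_datetime(["2025-09-01", "2025-09-10",
                                   "2025-10-02", "2025-10-20"]),
    "conversion": pd.to_datetime(["2025-11-15", "2025-09-25",
                                  "2026-01-08", "2025-12-30"]),
})

days = (df["conversion"] - df["first_touch"]).dt.days
print(days.describe(percentiles=[0.5, 0.9]))

# If the 90th percentile sits well past 30 days, a 30-day attribution
# window is guaranteed to undercount your demand-creation campaigns.
```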


The measurement gap is closable. It requires discipline more than budget, and a willingness to report numbers that look smaller in the short term so they can be trusted in the long term.


Here's the question worth sitting with: if your best-performing demand campaign is only showing you half its results, what does your actual pipeline look like? Let's measure what you've been missing.


Sources:

1 Google Internal Data, Global, July 30 - December 31, 2025.

2 Fospha, Demand Gen and YouTube Playbook, 2025.

3 Google/Kantar, AI in Marketer Journey, Global, April 2024.

4 Google Data, Global, April 2025.

5 Google Ads product announcement, 2025.

6 Pandora case study via Google, 2025/2026.
