Incrementality Test Data Can Make Your MMM Less Accurate
December 22, 2025

Why using test data can make your MMM less accurate (and when it might help)

Marketing measurement has taken some interesting turns recently. With privacy changes limiting tracking, many marketers are turning to marketing mix modeling (MMM). But there’s a twist: some companies suggest that test data, like incrementality studies or geo tests, can make an MMM more accurate. Recent research shows this isn’t always true. In fact, using test data to calibrate your MMM can actually make it less accurate in many cases.

Understanding how marketing really works

Before we dive into why test data can hurt your MMM’s accuracy, let’s establish some fundamental truths about how marketing works. Marketing isn’t a series of isolated actions—it’s an interconnected system where every piece impacts the others. Here are the key dynamics at play:

  1. Your top-of-funnel spend drives awareness that shows up everywhere, from bottom-funnel conversions to organic traffic
  2. Without adequate top-of-funnel spend, your bottom-funnel efforts won’t have enough volume to convert
  3. If you neglect bottom-funnel marketing, your competitors will capture the demand you create
  4. Marketing effectiveness changes with seasons and events: your dollar might go further during Black Friday, but impressions cost more
  5. Marketing effects compound over time and need continuous monitoring
  6. Different businesses have different purchase consideration cycles, affecting how long it takes to see marketing impact
  7. Marketing efficiency doesn’t always diminish with spend; it can actually increase depending on factors like seasonality, creative quality, and spend levels
  8. Marketing effects decay over time, but not at a constant rate or in isolation (see the sketch after this list)
  9. There will always be external factors that impact your paid media performance
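
To make dynamics 7 and 8 concrete, here’s a minimal sketch of two transformations that are standard in MMM practice: geometric adstock for carryover and a Hill curve for saturation. This is an illustration, not our production model, and every parameter and spend figure below is invented.

```python
import numpy as np

def geometric_adstock(spend, decay=0.6):
    """Carry part of each week's effect into later weeks (dynamic 8)."""
    out = np.zeros(len(spend))
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

def hill_saturation(x, half_sat=50.0, shape=2.0):
    """Diminishing-returns curve (dynamic 7): response flattens at high spend."""
    return x**shape / (half_sat**shape + x**shape)

weekly_spend = np.array([10.0, 10, 10, 100, 0, 0, 0, 0])  # a burst in week 4
effect = hill_saturation(geometric_adstock(weekly_spend))
print(np.round(effect, 2))  # the response persists for weeks after spend stops
```

Run it and you’ll see the response keep arriving for weeks after the burst ends, and grow less than proportionally as spend rises, which is exactly what a point-in-time measurement struggles to capture.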

It’s critical that we, as marketers and the practitioners who serve them, establish how the world of marketing works. Any technology in this space is only helpful if it operates the way we all know marketing functions in the real world, and establishing that common understanding helps us evaluate marketing tools and technologies for their true real-world applications.

The problem with test data and MMMs

Research shows a fundamental mismatch between how test data works and how marketing actually operates. Tests typically look at singular moments in time and isolated variables, while marketing is an ongoing, interconnected system. This creates several problems:

Time horizon issues:

  • Tests usually run for weeks or months
  • Marketing effects can take much longer to materialize
  • Tests miss compound effects that build over time (a back-of-the-envelope calculation follows this list)
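
A back-of-the-envelope calculation shows how large that miss can be. If carryover follows a geometric pattern with weekly decay rate d, a dollar spent today produces a total effect proportional to 1/(1 - d), while a test observing only the first k weeks captures the fraction 1 - d^k. The decay rates below are hypothetical, chosen only to illustrate the point:

```python
for decay in (0.5, 0.8, 0.9):
    for weeks in (4, 12):
        captured = 1 - decay**weeks
        print(f"decay={decay}: a {weeks}-week window sees {captured:.0%} of the total effect")
```

At a slow weekly decay of 0.9, a four-week test observes only about a third of the channel’s eventual effect; the rest accrues after the test has already ended.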

Scope limitations:

  • Tests isolate variables that naturally interact in marketing
  • Seasonal patterns and external factors get ignored
  • Cross-channel effects are missed entirely

When we force an MMM to align with test data, we’re essentially asking it to ignore the complex reality of marketing in favor of simplified, point-in-time measurements. The research bears this out—models calibrated with test data showed:

  • 2-3x higher error rates
  • Lower model fit scores
  • Reduced accuracy in revenue predictions
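
To see why forcing alignment hurts, it helps to look at what “calibration” usually does mechanically: it adds a penalty that pulls a channel’s coefficient toward the test’s number. The sketch below is a stylized least-squares version on simulated data, not Prescient’s method; it shows that pinning a coefficient to a test result from an unrepresentative window degrades the model’s fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 104                                             # two years of weekly data
spend = rng.uniform(0, 100, size=(n, 2))            # two channels
true_beta = np.array([0.8, 0.3])
revenue = spend @ true_beta + rng.normal(0, 5, n)   # simulated ground truth

def fit(X, y, anchor=None, weight=1e8):
    """Least squares, optionally penalized toward an 'anchor' for channel 0,
    which is roughly what calibrating an MMM to a test result does."""
    XtX, Xty = X.T @ X, X.T @ y
    if anchor is not None:
        XtX = XtX + weight * np.diag([1.0, 0.0])    # penalty (beta0 - anchor)^2
        Xty = Xty + weight * np.array([anchor, 0.0])
    return np.linalg.solve(XtX, Xty)

free = fit(spend, revenue)
# Suppose a short geo test, run in an unrepresentative window, implied a much
# lower effect for channel 0 than the long-run truth of 0.8:
calibrated = fit(spend, revenue, anchor=0.3)

for name, beta in [("uncalibrated", free), ("calibrated", calibrated)]:
    rmse = np.sqrt(np.mean((revenue - spend @ beta) ** 2))
    print(f"{name}: beta={np.round(beta, 2)}, RMSE={rmse:.2f}")
```

The calibrated model dutifully reproduces the test’s number, distorts its other coefficient to compensate, and fits the revenue it is supposed to explain noticeably worse.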

That’s not to say that test data is always harmful, or that incrementality and geo tests aren’t worth running. But the limitations of these tests need to be understood in order to frame their results in the most realistic, helpful way possible for brands.

How our models make this more complex

We’re really proud of our marketing mix modeling here at Prescient. If this isn’t the first blog article you’re reading, you probably know that already. Our Data Science team has spent years researching our models and building them from scratch because we weren’t satisfied with open-source models and their ability to reflect the realities of marketing.

Our MMM already calculates the effect of marketing for each channel, learning values for your unique brand from your historical data. Swapping out a value the MMM has learned for one from an incrementality test is a bit like swapping parts between cars of different makes. Sure, the parts may have the same name, but car parts are highly interconnected and often incompatible with one another.

Essentially, swapping out values is not as straightforward as it might seem.
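
One concrete way to see the mismatch: an incrementality test typically reports an average lift per dollar at whatever spend level it happened to run, while the value an MMM learns sits inside a saturation curve whose marginal return changes with spend. A toy example, with every number invented:

```python
def response(spend, half_sat=50.0, top=100.0):
    """A hypothetical saturating channel response (same Hill shape as earlier)."""
    return top * spend / (half_sat + spend)

# A geo test run at $20/week measures an *average* lift per dollar there:
test_lift_per_dollar = response(20) / 20                    # ~1.43

# But the brand actually spends $80/week, where the *marginal* return differs:
eps = 0.01
marginal_at_80 = (response(80 + eps) - response(80)) / eps  # ~0.30

print(f"test-implied effect per dollar:  {test_lift_per_dollar:.2f}")
print(f"marginal effect at actual spend: {marginal_at_80:.2f}")
```

Both numbers describe “the effect of the channel,” but they are different quantities measured at different points on the curve: same part name, different car.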

When test data might actually help

While test data can often reduce MMM accuracy, there are situations in which it might increase accuracy. The key is understanding when and how to use it. Test data might be beneficial when:

  • The test period captures a full seasonal cycle
  • The test design accounts for cross-channel effects
  • The data comes from multiple tests across different times
  • The test results are used for validation rather than calibration (see the sketch below)
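
That last distinction is worth sketching. Used for validation, the test never overwrites anything the model has learned; it simply checks whether the two agree over the test window. A minimal version, with a hypothetical function name and made-up figures:

```python
def validate_against_test(mmm_incremental, test_lift, test_ci):
    """Use a geo test to sanity-check an MMM instead of overwriting it.

    mmm_incremental: revenue the fitted MMM attributes to the channel
                     over the exact test window
    test_lift:       incremental revenue the test measured
    test_ci:         (low, high) confidence interval from the test
    """
    low, high = test_ci
    if low <= mmm_incremental <= high:
        return "consistent: no action needed"
    return "inconsistent: investigate data, model spec, or test design"

print(validate_against_test(mmm_incremental=120_000,
                            test_lift=100_000,
                            test_ci=(80_000, 140_000)))
```

Disagreement doesn’t tell you which side is wrong, but it tells you where to look, without forcing the model to absorb a possibly unrepresentative number.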

Thinking of test data as universally harmful to your MMM is just as ill-advised as considering it universally helpful, and it’s this nuance that we see missing from marketing conversations. But even once your team knows it needs to evaluate whether test data is the right tool for your goals, how does it go about doing that? We have some suggestions.

Making the right decision for your brand

Deciding whether to use test data in your MMM requires careful consideration of your specific situation. Prescient lets our clients run multiple models, comparing the accuracy of an MMM calibrated on test data against one built without it, because we want you to decide based on a trial run, not speculation. You invested in these tests. That doesn’t mean they’ll always help your MMM’s accuracy, but it does mean it’s worth testing so you can get the most value from those investments.
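
In practice, comparing how the models stack up means scoring both of them on the same unseen weeks. The sketch below isn’t our scoring code; the coefficients and data are simulated purely to show the mechanics of a holdout comparison:

```python
import numpy as np

def holdout_mape(predict, X, y):
    """Mean absolute percentage error of a model's predictions on unseen weeks."""
    return float(np.mean(np.abs((y - predict(X)) / y)))

# Toy stand-ins for two fitted MMMs (all coefficients invented):
beta_with_test = np.array([0.3, 0.3])   # pulled toward a test result
beta_without = np.array([0.8, 0.3])     # learned from history alone

rng = np.random.default_rng(1)
X_holdout = rng.uniform(10, 100, size=(12, 2))               # 12 unseen weeks
y_holdout = X_holdout @ np.array([0.8, 0.3]) + rng.normal(0, 3, 12)

for label, beta in [("with test data", beta_with_test), ("without", beta_without)]:
    mape = holdout_mape(lambda X: X @ beta, X_holdout, y_holdout)
    print(f"{label}: holdout MAPE = {mape:.1%}")
```

Whichever model predicts revenue better on weeks it has never seen is the one to keep, regardless of what conventional wisdom says the test data should do.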

For most of you, we think the accuracy scores will tell you a lot. And we’ll make a suggestion about what we see as the best model for your brand, but the ultimate decision is in your hands.

A better approach to marketing measurement

The most accurate marketing measurement reflects marketing reality. That means understanding and modeling:

  • The interconnected nature of channels
  • Varying efficiency patterns across seasons
  • Complex relationships between funnel stages
  • Real-world factors that impact performance

This is why we built our measurement technology to account for these realities. We don’t take a stance on whether you should use test data—instead, we let you compare models with and without it to see which performs better for your specific situation. Because at the end of the day, what matters most is having measurement you can trust to reflect your marketing ecosystem accurately.

A framework for evaluating test data

The decision to use test data isn’t one-size-fits-all. Every brand’s marketing ecosystem is unique, which means you need a structured way to evaluate whether test data will help or hurt your measurement accuracy. When evaluating your test data, start by examining these key areas:

Marketing environment alignment:

  • Does your test data account for current market conditions?
  • Were tests conducted during representative time periods?
  • Did external factors potentially impact the results?

Data quality indicators:

  • Is the data recent enough to be relevant?
  • Were the tests run long enough to capture true impact?
  • Do you have enough data points to draw reliable conclusions?

Implementation readiness:

  • Do you have the resources to properly integrate the data?
  • Can you monitor the impact on model accuracy?
  • Are you prepared to reverse course if accuracy decreases?

Wrapping it up

Marketing measurement is complex because marketing itself is complex. While test data might seem like a straightforward way to validate or improve your MMM, the reality is more nuanced. Test data can either enhance or diminish your model’s accuracy depending on numerous factors—and you won’t know which until you try both approaches.

That’s why we believe in empowering marketers with choices. Our platform lets you compare models with and without test data, examining accuracy scores for each approach. This way, you can make an informed decision based on what actually works best for your brand, not what conventional wisdom suggests should work. (You can read more about this in our announcement about Validation Layer.)

Ultimately, the goal isn’t to use or not use test data—it’s to have the most accurate possible representation of your marketing ecosystem. Sometimes that means incorporating test data, sometimes it means leaving it out. What matters is having the flexibility to choose and the metrics to know you’ve made the right choice.
