January 29, 2026

Why incrementality testing can’t tell you what to do next

Incrementality testing has become marketing’s supposed gold standard. Run a holdout test, compare your ad-exposed group to a control group, get clean experimental proof of what’s working. It sounds scientifically rigorous. It feels like real data-driven decision-making. And it’s completely useless for the question marketers actually need answered: what should I do with my budget next month?

Here’s the uncomfortable truth the incrementality testing evangelists don’t want to admit: these tests only tell you what happened during a specific window under specific conditions. They’re backward-looking snapshots, not forward-looking guidance. Your holdout test from July? It tells you nothing about December performance. Results during low-competition periods don’t predict high-competition windows. Tests at $10,000 monthly spend reveal nothing about what happens at $50,000. The test that shows your Facebook campaign drove 2x incremental lift last quarter cannot tell you whether to increase Facebook spend next quarter, how much to allocate relative to other channels, or whether that effectiveness will hold as you scale.

The point-in-time problem

Marketing effectiveness isn’t static. It changes constantly based on seasonality, competitive dynamics, creative freshness, and audience saturation. Your incrementality test captured one moment in time that won’t repeat. The conditions during your test—the competitive landscape, the audience you’d already reached, the creative that was running, the season you were in—all of these factors were specific to that window. Next month brings different conditions, and your test results tell you nothing about performance under those new circumstances.

This creates a fundamental mismatch between what incrementality testing measures and what marketers need to know. You don’t need to know what happened. You need to know what will happen if you make specific changes to your marketing mix. Should you shift budget from Facebook to YouTube? Scale up your prospecting campaigns? Pull back on retargeting? Incrementality tests can’t answer these questions because they only show you what occurred under one set of conditions that no longer exist.

The scaling problem

Perhaps the most dangerous limitation is that incrementality tests show what happened at your current spend level without revealing how effectiveness changes as you scale. This makes them actively misleading for optimization decisions. A campaign showing strong incremental lift at $10,000 might saturate quickly at $15,000 or might have substantial room to scale to $30,000 before hitting diminishing returns. The test tells you nothing about where saturation occurs or how marginal returns shift at different investment levels.

Marketing effectiveness follows curves, not straight lines. Saturation dynamics are far more complex than the industry typically assumes: some campaigns hit clear saturation points where additional spend delivers sharply diminishing returns, others maintain relatively stable efficiency across wide spend ranges, and some show multiple efficiency peaks at different investment levels. Understanding these response patterns is essential for smart budget allocation—but incrementality testing provides only a single data point on a complex curve. Making scaling decisions based on that single point is like trying to understand a movie by watching one frame.
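To make the single-data-point problem concrete, here is a minimal sketch assuming a Hill-type saturation curve, a common functional form for spend-response in media mix models. Every parameter value below is invented for illustration; the point is the shape, not the numbers.

```python
# A minimal sketch, assuming a Hill-type saturation curve (a common
# functional form for spend-response in media mix models). All parameter
# values here are invented for illustration.

def hill_response(spend, max_effect=500.0, half_saturation=20_000.0, shape=1.5):
    """Incremental conversions attributable to a channel at a given spend."""
    return max_effect * spend**shape / (half_saturation**shape + spend**shape)

def marginal_return(spend, step=1_000.0):
    """Extra conversions bought by the next $1,000 of spend."""
    return hill_response(spend + step) - hill_response(spend)

for spend in (10_000, 15_000, 30_000, 50_000):
    print(f"${spend:>6,}: total lift {hill_response(spend):6.1f}, "
          f"next $1k buys {marginal_return(spend):5.2f} more conversions")
```

An incrementality test run at $10,000 observes only the first row. The steep decline in marginal return further along the curve, the information that actually drives scaling decisions, is invisible to it.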

The interaction problem

Marketing channels don’t operate in isolation, but incrementality tests measure them as if they do. Your Facebook holdout test isolates Facebook performance while ignoring how Facebook effectiveness changes when you’re simultaneously running YouTube awareness campaigns that warm up audiences. The lift you measured came from Facebook operating within your specific marketing mix at that moment; change the mix, and Facebook performance changes too.

These cross-channel interactions shift constantly. Upper-funnel awareness campaigns make lower-funnel conversion campaigns more effective. Brand-building efforts increase the efficiency of performance marketing. Channels create halo effects that show up in organic search, direct traffic, and branded conversions. Incrementality tests on individual channels miss these interactions entirely, leading to budget allocation decisions that optimize isolated parts while damaging the whole.
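As a toy illustration of why this matters for measurement (hypothetical names and coefficients, not any vendor's actual specification), consider a response function where Facebook's per-dollar effectiveness rises with concurrent YouTube awareness spend:

```python
# A toy interaction model (hypothetical names and coefficients, not any
# vendor's actual specification): YouTube awareness spend raises
# Facebook's per-dollar effectiveness.

def facebook_lift(fb_spend, yt_spend, base_eff=0.008, synergy=2e-7):
    """Incremental conversions from Facebook within a given mix."""
    effectiveness = base_eff + synergy * yt_spend  # halo from upper funnel
    return effectiveness * fb_spend

# The identical $20,000 Facebook budget tests very differently depending
# on what the rest of the mix was doing during the holdout window.
print(facebook_lift(20_000, yt_spend=0))       # ~160 conversions
print(facebook_lift(20_000, yt_spend=30_000))  # ~280 conversions
```

Same channel, same budget, same creative: the measured lift nearly doubles because of something the holdout test never looked at.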

What marketers actually need

Let’s be clear about what strategic marketing decisions require:

  • You need to understand how effectiveness changes across spend levels: where saturation occurs and where efficiency opportunities exist. 
  • You need to predict what will happen if you reallocate budget between channels, not just measure what happened when you ran one specific configuration. 
  • You need to know how channels interact and support each other within your complete marketing mix. 
  • You need scenario forecasting that shows expected outcomes from different budget allocation strategies before you commit the investment.
  • You need forward-looking optimization guidance that accounts for how market conditions, seasonality, and competitive dynamics affect performance. 

What worked in July might not work in December. What performs well at current spend levels might not scale. What looks efficient in isolation might underperform when you account for channel interactions. Strategic decisions require understanding these dynamics, not just validating that something worked once under specific conditions.

Incrementality testing provides none of this. It’s “locally accurate but globally inaccurate,” correctly measuring what happened in one specific context but incapable of generalizing to guide future action. The experimental rigor that makes incrementality tests feel trustworthy is precisely what limits their strategic value. By isolating variables and controlling conditions, these tests remove the complexity that determines real marketing performance.

The better approach

Marketing mix modeling offers what incrementality testing cannot: forward-looking optimization based on continuous measurement that adapts as your marketing mix evolves. Instead of a single snapshot, MMM analyzes how effectiveness varies across conditions, spend levels, and time periods. This reveals saturation curves showing where to scale and where you’re overspending. It captures channel interactions and halo effects that isolated tests miss. It accounts for seasonality, competitive dynamics, and market changes that make point-in-time tests obsolete.

Most importantly, Prescient’s MMM provides scenario forecasting, the ability to model what will happen under different budget allocation strategies before you make changes. Should you shift $50,000 from Facebook to YouTube? MMM can forecast the expected impact. Want to know how much to invest during your peak season versus your slow periods? Scenario forecasting shows the tradeoffs. Need to understand whether scaling a campaign will maintain efficiency or hit saturation? The modeling reveals where those inflection points occur.
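Mechanically, scenario forecasting amounts to evaluating fitted response curves under candidate allocations and comparing the expected totals. Here is an illustrative sketch; the curve shapes and parameters are assumptions for this example, not Prescient's model, and in practice they would be estimated from historical spend and outcome data.

```python
# An illustrative sketch of scenario forecasting (curve shapes and
# parameters are assumptions for this example, not Prescient's model):
# evaluate fitted response curves under candidate allocations and compare.

def hill(spend, max_effect, half_sat, shape=1.2):
    return max_effect * spend**shape / (half_sat**shape + spend**shape)

CHANNELS = {  # hypothetical per-channel parameters a model might estimate
    "facebook": dict(max_effect=900.0, half_sat=60_000.0),
    "youtube":  dict(max_effect=700.0, half_sat=40_000.0),
}

def forecast(allocation):
    """Expected total conversions for a proposed budget split."""
    return sum(hill(spend, **CHANNELS[ch]) for ch, spend in allocation.items())

current  = {"facebook": 100_000, "youtube": 20_000}
proposed = {"facebook":  50_000, "youtube": 70_000}  # shift $50k FB -> YT

print(f"current : {forecast(current):.0f} expected conversions")
print(f"proposed: {forecast(proposed):.0f} expected conversions")
```

A production model re-estimates those curves continuously as conditions change, but the decision logic is exactly this simple: score each candidate allocation before committing the budget.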

This doesn’t mean incrementality testing has no value. These tests can provide useful data points about what happened during specific periods. However, incrementality test results themselves need validation; they’re snapshots taken under particular conditions that may or may not represent broader patterns. Marketing mix modeling can help validate whether incrementality test findings align with longer-term performance patterns and whether those point-in-time results reflect sustainable dynamics or temporary anomalies. The key is recognizing that incrementality tests are inputs to understanding performance, not definitive answers about what to do next.

Making the shift

The industry needs to move beyond the incrementality testing orthodoxy. These tests fundamentally cannot answer the questions that drive strategic marketing decisions. By the time you design the test, run it for sufficient duration, analyze results, and present findings, market conditions have changed. You’re making decisions based on outdated information about a situation that no longer exists.

Strategic marketing requires continuous measurement that reveals how effectiveness changes across conditions and provides predictive guidance about future performance. It requires understanding saturation dynamics, channel interactions, and seasonal patterns. It requires scenario forecasting that shows expected outcomes from different allocation strategies. Point-in-time experiments, no matter how rigorously designed, cannot deliver what marketers actually need to optimize their budgets and drive growth.

The incrementality test that shows your campaign drove lift last quarter has told you something useful: that campaign worked under those conditions. But it hasn’t told you what to do with your budget next quarter. For that, you need measurement approaches designed for optimization, not just validation. Book a demo to see how Prescient’s MMM can offer you both.
