Brands using a regression-based MMM overspend by 81% during peak seasons
Linnea Zielinski · 6 min read
Every year, brands pour their biggest budgets into the weeks surrounding Black Friday and Cyber Monday. It's the highest-stakes window in DTC marketing: ad costs spike, competition intensifies, and the decisions made in those few weeks can shape the trajectory of an entire quarter. Most brands treat peak season budget allocation as a data problem. Get enough signal, run your model, follow the recommendations. The math should handle the rest.
But what if the model itself breaks down precisely when you need it most?
Research from Prescient AI shows that brands relying on regression-based marketing mix models during peak periods are systematically directed to overspend by approximately 81% relative to the truly optimal allocation. Understanding why this model failure happens is one of the most consequential things a growth marketer can do before Q4 planning season.
Key takeaways
- Regression-based MMMs assume that baseline demand and media effects can be cleanly separated, an assumption that breaks down when spend and organic demand are highly correlated, as they always are during peak seasons.
- When a regression-based model can't tell the difference between revenue your ads drove and revenue that would have happened anyway, it over-attributes that holiday lift to media, inflating apparent channel performance.
- That inflated attribution doesn't stay contained in your reporting layer; it feeds directly into your budget optimizer and produces recommendations built on a distorted view of what's actually working.
- In research conducted using a synthetic environment with known ground truth, a regression-based MMM recommended overspending by approximately 81% relative to the true optimal allocation during a peak period analogous to Black Friday and Cyber Monday.
- A Bayesian MMM, another widely used open-source approach, had the opposite problem and recommended underspending by approximately 11.5%, reflecting a different flaw.
- Prescient's model, built as a dynamical system that doesn't assume baseline and media can be separated, was just ~1% off the optimal allocation in the same test conditions.
- The gap between these models is practical, not just academic; at peak-season spend levels, an 81% overspend is a material and largely avoidable budget error.
Why peak seasons are uniquely hard for MMMs
Most of the year, a regression-based MMM can hold its own. Spend and organic demand move somewhat independently, and the model's core assumption—that you can separate what your ads drove from what would have happened anyway—is at least plausible. (A model built on the assumption that these can be separated is called a separable model.)
Peak seasons destroy that assumption. During Q4, brands deliberately increase spend when consumer demand is already rising. Budgets go up because purchase intent goes up. That means your ad spend and your baseline demand are moving in the same direction, at the same time, for the same reasons. In statistical terms, they're highly correlated. And that correlation is exactly what makes the attribution problem impossible for a separable model to solve cleanly.
When spend and demand are tangled together like this, the model faces an identifiability problem: there are many different ways to divide observed revenue between "what the ads did" and "what was already going to happen," and the data alone can't tell you which split is right. The model defaults to an answer based on its assumptions, and for regression-based MMMs, those assumptions point it in the wrong direction.
What "regression-based" actually means for your Q4 budget
A regression-based MMM works by breaking down your revenue into separate parts: a baseline component representing organic demand (trend, seasonality, holidays) and a media component representing the incremental lift from your ads. These components are estimated independently and then added together. That's the separability assumption, and it's baked into the model.
Under normal conditions, this is workable. During peak seasons—when it matters most for your model to be accurate—it isn't.
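To make the separability assumption concrete, here's a schematic sketch (illustrative only, not any vendor's actual code, and the numbers are made up): the model assumes revenue decomposes into independently estimated baseline pieces plus summed media contributions.

```python
# Schematic of the separability assumption in a regression-based MMM:
# revenue is modeled as baseline + media, with each component
# estimated independently and then added together.

def predicted_revenue(trend, seasonality, holiday_lift, media_contributions):
    """Additive MMM structure: organic baseline plus summed channel lift."""
    baseline = trend + seasonality + holiday_lift
    media = sum(media_contributions)
    return baseline + media

# A hypothetical peak week: organic baseline pieces plus two channels' lift.
print(predicted_revenue(100.0, 20.0, 200.0, [80.0, 40.0]))  # 440.0
```

The structure itself is the assumption: nothing in this form lets the holiday lift and the media lift interact, which is exactly what goes wrong when they move together.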
The baseline leakage problem
When spend is synchronized with holiday demand, a regression-based model struggles to assign revenue to the right bucket. Holiday lift that's genuinely organic gets partially credited to media because spend is rising at the same time. The model sees revenue go up when spend goes up, and it can't fully distinguish correlation from contribution. The result is that your channels look more effective than they actually were during peak periods.
This phenomenon is called baseline leakage, and Prescient's research shows that regression-based models consistently attribute too much to holiday effects. These models frequently overstate how much holidays contribute by 20–30%. If you want to go deeper on why this happens and why it's so hard to fix within the separable framework, this piece covers the mechanics in detail.
Forced saturation compounds the error
Regression-based MMMs also impose diminishing returns on your channels by design because they assume your spend is approaching a ceiling. During peaks, you may actually be in a scalable position where spending more would still produce proportional returns. But the model doesn't know that, and it's already bending your response curve downward. That makes your optimization recommendations even more conservative than they should be, and not in a helpful way.
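A quick sketch of why the imposed curve matters (the functional form and parameters here are illustrative; saturating transforms of this general shape are common in MMMs): once the model fits a saturating response curve, the predicted return on the next dollar collapses at high spend, regardless of whether the real peak-season response has flattened.

```python
# Illustrative saturating response curve of the kind regression-based
# MMMs impose by design. At peak spend levels, the predicted marginal
# return collapses even if the true response is still roughly linear.

def saturating_response(spend, vmax=1000.0, half_sat=50.0):
    """Predicted revenue at a given spend level (hypothetical parameters)."""
    return vmax * spend / (spend + half_sat)

def marginal_return(spend, delta=1.0):
    """Predicted revenue from the next `delta` dollars of spend."""
    return saturating_response(spend + delta) - saturating_response(spend)

print(f"marginal return at low spend:  {marginal_return(20.0):.2f}")   # ~10 per dollar
print(f"marginal return at peak spend: {marginal_return(200.0):.2f}")  # under 1 per dollar
```

If the true peak-season environment is still scalable, this curve makes the model's recommendations systematically too conservative at exactly the spend levels where the curve's shape matters most.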
What the research actually found
Prescient's research team, led by CTO and data scientist Cody Greco, tested our model against two widely used open-source MMM baselines using a synthetic, agent-based simulation environment with known ground truth. That last part matters: because the true incremental contribution of each campaign was known by design, the team could measure attribution error directly rather than inferring it from proxies.
The synthetic environment was built to reflect the dynamics of real marketing systems, including nonstationarity, cross-channel interaction, and spend patterns that deliberately correlate with seasonality and peak events. Neither the environment nor its parameters were designed to favor any particular model.
During a peak period analogous to Black Friday and Cyber Monday, the team computed the budget allocation each model recommended and compared it to the simulator's known optimal allocation. The results were stark.
The regression-based baseline (Baseline A) recommended overspending by approximately 81% relative to what was optimal.
This happened for exactly the reason we expected: the model gave marketing campaigns too much credit for peak-period revenue and gave the holiday too little. Feeding that misattribution through an optimizer amplified it into an extreme allocation error.
The Bayesian MMM (Baseline B) had the opposite problem, recommending underspending by approximately 11.5%. Prescient’s recommendation deviated from the optimal allocation by approximately 1%.
Why attribution error compounds under optimization
Why does a moderate attribution error produce such an extreme allocation error? The intuition is straightforward once you see it.
Errors in attribution don't stay contained in your reporting dashboard. The whole point of running an MMM is to feed its outputs into an optimization:
- where should we shift budget?
- how much should we scale?
- what should we cut?
When the attribution is wrong, the optimization landscape the model hands you doesn't match reality. Channels look more scalable than they are. Budget constraints look looser. The model points confidently in a direction that isn't really there.
At normal spend levels, this might produce a recommendation that's off by 10–15%. At peak season, when budgets are elevated, competition is high, and every marginal dollar matters more, that same flaw gets magnified. The model is extrapolating from bad attribution into a high-stakes decision, and the error scales with the stakes.
That's why the research found an 81% overspend, not a modest one. The regression-based model wasn't wildly inaccurate throughout the year; its misattribution compounded rapidly when peak conditions and optimization pressure hit simultaneously.
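The amplification mechanism can be shown with a toy calculation (my own numbers, not the study's): assume a concave true response curve and an optimizer that spends until the marginal return falls to $1. A 2x attribution inflation, well within what baseline leakage can produce, turns into a far larger allocation error.

```python
# Toy illustration of how attribution error compounds under
# optimization. Assumes a log-shaped true response and an optimizer
# that spends until marginal return equals $1 (both assumptions mine).
import math

def true_revenue(spend, a=3.0):
    """Concave true response: marginal return is a / (1 + spend)."""
    return a * math.log(1.0 + spend)

a = 3.0
# Optimal spend: solve a / (1 + s) == 1  ->  s* = a - 1
optimal_spend = a - 1.0  # 2.0

# Baseline leakage inflates the channel's apparent effect by 2x,
# so the optimizer solves the same condition with 2a instead of a.
inflated_a = 2.0 * a
recommended_spend = inflated_a - 1.0  # 5.0

overspend_pct = 100.0 * (recommended_spend - optimal_spend) / optimal_spend
print(f"optimal spend:     ${optimal_spend:.2f}")
print(f"recommended spend: ${recommended_spend:.2f}")
print(f"overspend:         {overspend_pct:.0f}%")  # 2x attribution error -> 150% overspend
```

The point of the sketch is the shape of the failure, not the specific numbers: the optimizer keeps spending as long as the (inflated) marginal return looks profitable, so a moderate attribution error becomes a large budget error.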
Where Prescient comes in
Prescient’s model is built differently from the ground up. Rather than breaking revenue into independent baseline and media components, it models marketing as a dynamical system where these effects emerge together from the same underlying structure. That means we don’t face the same problem other models do when spend and demand rise or fall together. In the same synthetic peak-season test that produced an 81% overspend for the regression-based baseline, our model’s recommended budget allocation was only about 1% off the optimal allocation.
If you're heading into peak season planning with a regression-based model, or evaluating whether your current MMM is actually serving you when it matters most, we'd love to show you what our model can do. Book a demo with our team of experts to see the platform.