When you choose an index fund for retirement it seems straightforward and neutral. But it’s not. A market-cap-weighted fund assumes bigger companies deserve more of your money. An equal-weight fund assumes every company deserves the same allocation. Neither is neutral; each embeds beliefs about how markets work.
Marketing mix models work the same way. When you adopt an MMM, you get measurement, but you also tacitly accept assumptions about how marketing works that are baked into the model’s structure. Those assumptions can stay invisible until they start costing you money.
If these assumptions don’t match your reality, your model will systematically misattribute credit and recommend budget allocations that fail in practice.
Key takeaways:
- Every marketing mix model is built on assumptions about how marketing works, whether you examine those assumptions or not. These beliefs determine what your model can see, what it misses, and where it tells you to spend.
- Traditional MMMs assume 1960s broadcast dynamics that don’t match modern digital marketing, forcing separable baseline effects, universal saturation, and channel independence even when reality contradicts these patterns.
- Open-source MMMs improve on legacy models but still optimize for computational convenience rather than accuracy, forcing response curves and decay patterns that may not reflect your actual marketing dynamics.
- Prescient built the Nine Laws of Marketing as explicit assumptions grounded in observable patterns: halo effects, funnel directionality, seasonal efficiency variation, temporal propagation, heterogeneous saturation and decay, and external modulation.
- Before trusting any MMM’s budget recommendations, ask what assumptions it makes about your marketing. If those assumptions don’t match your reality, you’re measuring performance against someone else’s version of truth.
The invisible assumption problem
Every model makes assumptions. That sounds like a flaw, but it’s fundamental to modeling: simplifying reality is the whole point.
The issue isn’t that MMMs make assumptions. It’s that most marketers don’t know what assumptions their MMM makes.
They see outputs (channel contributions, ROAS estimates, budget recommendations) without understanding the logic that produced them. When recommendations don’t align with their business knowledge, they question their own judgment rather than the model’s assumptions.
This creates a dangerous dynamic in which million-dollar decisions are guided by assumptions the decision-makers haven’t examined and might not agree with if they understood them.
What traditional MMMs assume
Traditional MMMs were built for 1960s broadcast media. Their core assumptions:
- Separability: Marketing effects and baseline demand can be cleanly separated. But your campaigns deliberately launch during high-demand periods: holidays, seasonal peaks, and promotions. When spend and baseline move together, the model can’t reliably tell them apart. Attribution becomes mathematically under-identified.
- Universal saturation: These models assume all channels saturate the same way. They don’t: audiences differ, campaigns differ, and a single campaign can even have multiple efficiency peaks.
- Stationarity: Marketing efficiency stays constant over time. But the same $10,000 delivers different results in different contexts. July faces lower purchase intent. December benefits from holiday urgency. Models absorb these changes into baseline terms, missing real marketing dynamics.
- Channel independence: Each channel works in isolation. But channels interact constantly. Your YouTube awareness makes retargeting more effective. Your podcast sponsorship drives branded search. Upper-funnel campaigns appear to underperform because downstream impacts aren’t captured.
These assumptions made sense for TV spots and print ads with limited targeting, fixed inventory, and coarse measurement. Modern digital marketing violates every one of these conditions.
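The separability problem is easy to demonstrate. Here is a minimal sketch on synthetic data (NumPy only; every number is made up for illustration): when spend is deliberately timed to track baseline demand, a regression cannot stably divide credit between the two.

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 104
# Baseline demand follows a yearly season; spend is timed to chase it,
# the way real budgets chase holidays and peak periods.
season = 1 + 0.5 * np.sin(2 * np.pi * np.arange(weeks) / 52)
spend = season + rng.normal(0, 0.02, weeks)
sales = 100 * season + 30 * spend + rng.normal(0, 5, weeks)  # true spend effect: 30

# Refit sales ~ baseline + spend on bootstrap resamples. Because the two
# columns move together, the credit assigned to spend swings wildly.
X = np.column_stack([season, spend])
spend_coefs = []
for _ in range(200):
    idx = rng.integers(0, weeks, weeks)
    beta, *_ = np.linalg.lstsq(X[idx], sales[idx], rcond=None)
    spend_coefs.append(beta[1])
spend_coefs = np.array(spend_coefs)
print(f"true spend effect: 30; estimated range across refits: "
      f"{spend_coefs.min():.0f} to {spend_coefs.max():.0f}")
```

The same fit with spend scheduled independently of the season recovers the true coefficient tightly. When spend and baseline are collinear, many very different splits of credit explain the data almost equally well, which is what “mathematically under-identified” means in practice.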
What open-source MMMs assume
Modern frameworks like Robyn, Meridian, and PyMC-Marketing represent improvements. They use features like Bayesian methods, time-varying parameters, and sophisticated response functions. But they still make limiting assumptions for computational tractability:
- Fixed parametric response families: Every response curve must fit a Hill, Weibull, or Michaelis-Menten function, and these functions force diminishing returns by construction. When your actual marketing performance doesn’t fit these preset shapes, the model hands the credit to “baseline demand” instead of your campaigns.
- Geometric adstock: Effects decay at constant rates. But retargeting has fast decay. Podcast sponsorships build slowly over weeks. Brand campaigns last months or years. Forcing everything into set geometric patterns creates misattribution.
- Separable decomposition: Outcomes split into independent baseline and media contributions. But this creates identifiability problems when budgets are timed with demand, which they always are.
- Regularization as solution: These models use statistical penalties (such as ridge regression) to handle channels that run at the same time. But regularization doesn’t solve the attribution problem; it just picks one way to split credit among many equally valid options. The choice reflects modeling preferences, not causal truth.
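The first two constraints are easy to see in code. This is a hedged sketch of the two standard spend transforms; the function names and parameter values are illustrative, not any specific framework’s API:

```python
import numpy as np

def geometric_adstock(spend, decay):
    """Carry over a fixed fraction of the previous period's effect:
    effect_t = spend_t + decay * effect_{t-1}. One constant rate per channel,
    whether the channel fades in days or builds for months."""
    effect = np.zeros(len(spend))
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        effect[t] = carry
    return effect

def hill_saturation(x, half_sat, shape):
    """Hill curve: bounded above by 1, so returns always flatten out,
    even for a channel that is still scaling near-linearly."""
    return x**shape / (half_sat**shape + x**shape)

spend = np.array([0.0, 100, 100, 0, 0, 0])
print(geometric_adstock(spend, decay=0.5))  # effect lingers after spend stops
print(hill_saturation(np.array([0.5, 1.0, 2.0, 10.0]), half_sat=1.0, shape=1.0))
```

Every dollar past the half-saturation point is discounted by construction. If a channel genuinely delivers near-linear returns at your spend levels, the curve caps it anyway, and the budget optimizer will tell you to stop spending.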
These trade-offs work for research tools. But for million-dollar budget decisions, computational convenience is a dangerous foundation.
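The regularization point is worth making concrete. A sketch on synthetic data (ridge penalty hand-rolled in NumPy; all values invented): two channels that always run together, where in truth only one of them works.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
channel_a = rng.normal(0, 1, n)                  # the channel doing the work
channel_b = channel_a + rng.normal(0, 0.01, n)   # near-identical flighting
sales = 10 * channel_a + rng.normal(0, 1, n)     # true effect: all channel A

# Ridge regression: minimize ||y - X b||^2 + lam * ||b||^2.
X = np.column_stack([channel_a, channel_b])
lam = 1.0
beta = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ sales)
print(f"channel A: {beta[0]:.2f}, channel B: {beta[1]:.2f}")  # credit split roughly evenly
```

The penalty prefers small, balanced coefficients, so it divides the credit roughly in half. That is a stable answer, but stability is not correctness: the data supports giving all the credit to channel A just as well.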
Prescient’s Laws of Marketing
We started with a different question: what do we consistently observe in how marketing actually works? Our Marketing Laws are empirically motivated constraints that reflect real marketing dynamics. (You can read through all of them on our laws of marketing page if you want to understand each in depth.)
These laws cover the fundamental dynamics we see play out in real marketing systems. Some address how channels interact with each other, like how awareness campaigns create spillover effects that show up in organic search and direct traffic, or how upper-funnel investment influences the efficiency of your conversion tactics months later. Others focus on timing and context, recognizing that the same dollar spent in July performs differently than in December, or that different products have wildly different purchase cycles that standard attribution windows completely miss. Together, they form a framework that reflects how modern marketing actually works rather than the assumptions made by traditional or open-source MMMs.
These are still assumptions. But they’re grounded in observable patterns rather than computational convenience.
How to evaluate any MMM’s assumptions
Before trusting any measurement tool, ask these questions:
- Does it assume separability? Can it distinguish baseline from marketing when campaigns launch during peak seasons? If it “cleanly separates” effects, be skeptical.
- Does it force saturation? If your channel shows linear returns, will it still cap your spend? These mathematical formulas are designed to always show diminishing returns, even if your channel is actually performing efficiently at higher spend levels.
- Does it treat channels as independent? Can it see that awareness makes retargeting work better? Additive models (which estimate each channel in isolation and then add the pieces together) miss interaction effects.
- Does it assume constant efficiency? Does it think July dollars equal December dollars? Stationary assumptions misattribute seasonal dynamics.
- Can it represent your purchase cycle? Does it measure on timelines matching how long customers actually take to buy? Short windows undervalue high-consideration campaigns.
- What happens when assumptions fail? Does it adapt or force data into preferred structures? Rigid models produce incorrect explanations when assumptions break.
The bottom line
You can’t avoid assumptions; every MMM has them. What separates helpful assumptions from unhelpful ones is whether they align with how your marketing actually works.
Traditional MMMs assume 1960s broadcast dynamics. Open-source MMMs optimize for computational convenience. Prescient assumes empirically observed modern marketing dynamics.
None are perfect. (If a company could perfectly capture reality, they’d be raking it in on the stock market.) But some are more grounded in reality than others.
Before your next budget planning cycle, ask your measurement provider: What assumptions is your model making about how my marketing works?
If they can’t tell you, or if those assumptions don’t match your reality, you’re optimizing against someone else’s version of truth.
The most expensive assumption is the one you don’t know you’re making.