Beyond attribution

Editor's note: Maksim Zhirnov is a performance marketing manager at Yandex. He can be reached at mzhirnov@yandex-team.com.

In today's privacy-first marketing landscape, attribution models are losing their grip on reality. While last-touch, multi-touch attribution (MTA) and data-driven attribution (DDA) tell us which touchpoint “gets credit” for a conversion, they can't answer the fundamental question that keeps CMOs awake at night: Are our campaigns actually driving new business, or are they just getting credit for sales that would have happened anyway?

The answer lies in incrementality testing – controlled experiments that measure the true causal impact of marketing activities. This methodology is rapidly becoming the gold standard among brands from Uber to HelloFresh, which use it to optimize their multibillion-dollar media investments.

The attribution blind spot

Traditional attribution models suffer from a fundamental flaw: they assume correlation equals causation. When a customer clicks an ad and then purchases, attribution gives that touchpoint full or partial credit. But what if that customer was already planning to buy? What if they would have found your product through organic search instead?

The privacy changes accelerating this crisis include:

  • iOS 14.5+ reducing mobile tracking accuracy by 15-30%.
  • Third-party cookie deprecation affecting cross-device measurement.
  • Platform algorithms becoming increasingly sophisticated at targeting high-intent users (inflating attributed performance).

As one performance marketing director at a major e-commerce brand put it: “Our attribution was telling us that every channel was profitable, but our overall growth had flatlined. We were optimizing for vanity metrics, not actual business impact.”

What is incrementality testing?

Incrementality testing applies the scientific method to marketing measurement. Instead of asking “Who gets credit?” it asks “What would have happened if we hadn't run this campaign?”

The methodology splits your audience into two statistically similar groups: a test group (exposed to your marketing campaign) and a control group (withheld from exposure or shown a placebo).

By measuring the difference in conversion rates, revenue or other KPIs between these groups, you can isolate the true causal impact of your marketing spend.
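In practice, group assignment is usually deterministic so that a returning user never drifts between groups. Here is a minimal sketch of hash-based bucketing in Python; the salt, function name and 50/50 split are illustrative assumptions, not any specific platform's API:

```python
import hashlib

def assign_group(user_id: str, salt: str = "holdout-test-q3") -> str:
    """Deterministically assign a user to 'test' or 'control'.

    Hashing the user ID with a per-experiment salt keeps the
    assignment stable across sessions, preserving exclusivity
    between the two groups.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "test" if bucket < 0.5 else "control"  # assumed 50/50 split
```

Because the mapping depends only on the user ID and the salt, the same user lands in the same group every time the campaign logic runs.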

Key benefits:

  • Eliminate wasted spend on cannibalistic tactics.
  • Defend budget allocations with causal proof, not vendor dashboards.
  • Optimize for true incremental return on ad spend (ROAS), not reported ROAS.
  • Future-proof your measurement strategy (requires no third-party tracking).

The five-step rapid testing framework

Step 1: Define your hypothesis and success metrics

Start with a specific business question:

  • Does our retargeting campaign drive new sales or mostly capture customers who would convert organically?
  • What percentage of our branded search revenue is truly incremental?
  • Will scaling our TikTok prospecting budget bring in profitable new customers?

Then select meaningful KPIs: incremental conversion rate, incremental revenue per user, incremental return on ad spend (IROAS) and incremental customer acquisition cost (ICAC).

Step 2: Design your test and control groups

Your approach depends on the channel and campaign type:

Geographic experiments: Pause or modify spend in selected markets (DMAs, cities or regions) while maintaining normal activity elsewhere. Ideal for brand campaigns, local businesses or omnichannel measurement.

Audience split tests: Divide customer segments for email campaigns, retargeting or lookalike audiences. Critical requirement: maintain complete exclusivity between groups.

Platform-native tools: Leverage Meta conversion lift, Google campaign experiments or TikTok's measurement solutions for automated control group creation.

Pre-flight validation: Before launch, ensure test and control groups show similar historical performance trends (an R² of at least 0.9 between the two groups' pre-period series is a common benchmark for a valid split).
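A rough sketch of this check in Python; the daily figures below are invented purely for illustration:

```python
import numpy as np

# Daily conversions over a 14-day pre-period (invented numbers).
test_markets    = np.array([120, 132, 118, 125, 140, 155, 149,
                            122, 130, 121, 127, 138, 152, 147])
control_markets = np.array([118, 129, 121, 122, 143, 151, 150,
                            119, 133, 118, 125, 141, 149, 151])

r = np.corrcoef(test_markets, control_markets)[0, 1]  # Pearson r
r_squared = r ** 2

print(f"Pre-period R^2 = {r_squared:.3f}")
if r_squared < 0.9:
    print("Groups diverge before launch – rebalance and re-check.")
```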

Step 3: Execute the intervention

Run your test for a minimum of one week, though 4-8 weeks provides more robust results for slower-converting businesses. During the test period:

  • Maintain consistent external marketing activity.
  • Avoid major creative changes or competing campaigns.
  • Monitor for "media leakage" (control group exposure through other channels).
  • Document any external factors that might influence results.

Step 4: Analyze results and calculate lift

Incremental lift percentage: (test group conversion rate − control group conversion rate) ÷ control group conversion rate × 100.

Incremental revenue: (revenue per user in the test group − revenue per user in the control group) × number of users in the test group.

Incremental return on ad spend (IROAS): incremental revenue ÷ campaign spend.
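Expressed as a short Python sketch (all figures are toy numbers chosen for illustration, not benchmarks):

```python
# Toy results for a 50/50 audience split (illustrative numbers only).
test_users, test_conversions, test_revenue = 100_000, 2_400, 180_000.0
ctrl_users, ctrl_conversions, ctrl_revenue = 100_000, 2_000, 150_000.0
ad_spend = 25_000.0  # spend on the test cell during the experiment

test_cr = test_conversions / test_users  # 2.4% conversion rate
ctrl_cr = ctrl_conversions / ctrl_users  # 2.0% conversion rate

lift_pct = (test_cr - ctrl_cr) / ctrl_cr * 100
incremental_revenue = (test_revenue / test_users
                       - ctrl_revenue / ctrl_users) * test_users
iroas = incremental_revenue / ad_spend

print(f"Incremental lift: {lift_pct:.1f}%")                 # 20.0%
print(f"Incremental revenue: ${incremental_revenue:,.0f}")  # $30,000
print(f"IROAS: {iroas:.2f}")                                # 1.20
```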

Statistical validation: Run significance tests (t-tests for large samples, Bayesian methods for smaller ones) to ensure results aren't due to random chance. A p-value below 0.05, or a 95% confidence interval that excludes zero, indicates statistical significance.
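For conversion rates at sample sizes like those above, a two-proportion z-test (the large-sample equivalent of the t-test) is a common choice. A sketch reusing the toy numbers from the previous example:

```python
from math import sqrt
from scipy.stats import norm

x_t, n_t = 2_400, 100_000  # test conversions, test users
x_c, n_c = 2_000, 100_000  # control conversions, control users

p_t, p_c = x_t / n_t, x_c / n_c
p_pool = (x_t + x_c) / (n_t + n_c)  # pooled rate under the null hypothesis
se_pooled = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
z = (p_t - p_c) / se_pooled
p_value = 2 * norm.sf(abs(z))  # two-sided p-value

# 95% confidence interval for the difference in conversion rates.
se_diff = sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
ci_low = (p_t - p_c) - 1.96 * se_diff
ci_high = (p_t - p_c) + 1.96 * se_diff

print(f"z = {z:.2f}, p = {p_value:.2g}")
print(f"95% CI for rate difference: [{ci_low:.4f}, {ci_high:.4f}]")
# Significant if p < 0.05 and the interval excludes zero.
```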

Step 5: Apply insights and iterate

For positive lift: Scale successful tactics, reallocate budget from underperforming channels or extract winning creative elements for broader application.

For neutral or negative lift: Pause ineffective spending immediately, investigate root causes or test modified approaches.

For ongoing optimization: Feed incrementality results into marketing mix models (MMM) for long-term planning and establish always-on testing protocols for continuous learning.

Case studies

Nutrition brand (DTC): Incrementality testing revealed that TikTok's upper-funnel impact was severely undervalued by last-click attribution. The test showed 6x higher marginal ROI than reported, leading to an $11.8 million budget reallocation that increased total incremental reach.

Beauty brand: Meta's conversion lift study found that switching from attribution-based to incrementality-optimized bidding reduced cost per acquisition by 71% while delivering 3.3x incremental ROAS.

Omnichannel retailer: Split-testing branded search and retargeting campaigns revealed that only 5% of attributed search revenue was truly incremental. The brand reallocated 40% of search budget to prospecting channels, resulting in 25% total revenue growth.

The modern measurement stack

Leading consumer brands now employ measurement triangulation, combining: attribution models for real-time optimization; marketing-mix modeling for cross-channel budget allocation; incrementality testing for ground-truth validation.

This approach provides the speed of attribution, the breadth of MMM and the accuracy of controlled experiments – creating a measurement system that satisfies both marketing teams and finance departments.

Implementation recommendations

Technology solutions: Consider platforms like Measured, Lifesight or Rockerbox for automated incrementality testing, or Meta's GeoLift and Google's Campaign Experiments for channel-specific tests.

Organizational readiness: Ensure buy-in from leadership and establish clear processes for acting on test results. The most sophisticated measurement is worthless without organizational commitment to optimization.

Testing roadmap: Start with your largest-spending channels or those with the most questionable incrementality (often retargeting and branded search), then expand to test creative variants, audience segments and channel combinations.