Beauty brand increases BFCM performance through a lead-up strategy
By: Chen Ma, Head of Advertising Experimentation
In this study, Amazon Ads researchers conduct an A/B test to evaluate whether activating a lead-up campaign ahead of a tentpole event (Black Friday and Cyber Monday) can help drive greater consideration and conversion during the events.
The challenge for a beauty brand advertising with Amazon Ads
A German beauty brand advertising on Amazon in 2021 wanted to know how it could improve its ad campaign performance during the Black Friday and Cyber Monday (BFCM) event. Specifically, the brand wanted to improve its detail page view rate, purchase rate, and new-to-brand purchase rate.
The execution: Beauty brand runs A/B test on Amazon DSP display campaigns
To answer this question, we ran an A/B test comparing two strategies during BFCM to determine which performed better:
- Control strategy: Display ads delivered through Amazon DSP during the event only
- Test strategy: Display ads delivered through Amazon DSP in the lead-up to BFCM (two weeks earlier) and during BFCM
To test the effect of advertising in the lead-up to BFCM versus advertising during the event alone, we compared the difference in detail page view rate, purchase rate, and new-to-brand purchase rate for a single product Amazon Standard Identification Number (ASIN).
Activating lead-up campaigns ahead of BFCM drives greater consideration and conversion
Note: These performance metrics are based on a single advertiser at one point in time, and results may vary due to contextual and seasonality differences.
We found that activating a lead-up Amazon Ads campaign ahead of BFCM drives greater consideration and conversion during the event period.
The Test campaign outperformed the Control campaign in all three metrics.
Figure: Lift of the Test campaign over the Control campaign in detail page view rate, purchase rate, and new-to-brand purchase rate.
Benefits of running a randomized controlled experiment (e.g., A/B test, multivariate split test)
- Maximized learning: Eligible advertisers can seamlessly run an experiment using a portion of their annual budget. By adding a test to the media plan, advertisers can gather scientific insights that go beyond standard campaign performance reporting.
- Customized insight: Advertisers can generate customized insight by testing with their actual campaigns. The experiment insight can further help validate Amazon Ads recommendations.
- Experiment rigor: Through randomized controlled tests and statistical analysis, we establish causality between the implemented strategy and outcome.
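To illustrate the kind of statistical analysis that underpins such a randomized test, the lift in a conversion-style KPI between two arms can be checked with a two-proportion z-test. The counts below are hypothetical placeholders, and this is a minimal sketch rather than the Amazon Ads analysis pipeline:

```python
import math

def two_proportion_z_test(conv_test, n_test, conv_control, n_control):
    """Two-sided z-test for the difference between two conversion rates.
    Returns the observed relative lift and the z statistic; |z| > 1.96
    indicates significance at the 95% confidence level."""
    p_t = conv_test / n_test
    p_c = conv_control / n_control
    # Pooled rate under the null hypothesis of no difference
    p_pool = (conv_test + conv_control) / (n_test + n_control)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_test + 1 / n_control))
    z = (p_t - p_c) / se
    lift = (p_t - p_c) / p_c
    return lift, z

# Hypothetical counts for illustration: 260 vs. 200 purchases per 10,000
lift, z = two_proportion_z_test(260, 10_000, 200, 10_000)
print(f"lift={lift:.1%}, z={z:.2f}, significant={abs(z) > 1.96}")
```

If |z| falls below the critical value, the result defaults to the null hypothesis, which is exactly the "insufficient evidence" reading described in the methodology below.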
Methodology
- Translate the business problem into an actionable hypothesis: Does advertising in the lead-up to BFCM improve detail page view rate, purchase rate, and new-to-brand purchase rate?
- Define the measure of success: (1) Statistically significant lift driven by the multi-touch treatment(s), demonstrating the value of display advertising through Amazon DSP in the lead-up to BFCM. (2) Statistically significant lift driven by one of the multi-touch treatments. (3) A significant finding suggests the observed lift is likely driven by the treatment rather than by chance. A result that is not statistically significant should be treated as insufficient evidence that a treatment effect exists, defaulting to the null hypothesis of no difference between the tested variations.
- Design the experiment: (1) In this experiment, the Test strategy consisted of a two-week lead-up campaign followed by a BFCM campaign, running from November 11, 2021, to November 29, 2021. The Control strategy ran as a stand-alone BFCM campaign from November 25, 2021, to November 29, 2021. (2) The Test strategy leveraged a lead-up remarketing campaign, which promotes products to relevant audiences. (3) Most elements of the DSP setup (tactic budget, promoted and featured ASIN, creatives, frequency, bids, etc.) were mirrored across treatments. Budget optimization was turned off during the test, because budget shifts can induce variance beyond what is attributable to the tested variable.
- Identify experiment key performance indicators (KPIs): In this test, detail page view rate, purchase rate, and new-to-brand purchase rate were the primary metrics. We can measure the effect on other metrics, but the chances of detecting significant lifts are lower, since the test is not sized for the secondary KPIs.
- Estimate the experiment sample size: In general, a larger sample size better enables a test to distinguish a true effect from random noise. Given a limited budget, we need to arrive at a sample size that balances the cost and benefit of detecting a statistically significant result. We estimate the sample size by running a power analysis based on 80% to 90% statistical power, a 95% confidence level (5% significance level), the KPI baseline, and the minimum detectable effect. These estimates represent the recommended minimum sample size for statistical significance.
- Set up the experiment: Amazon Ads splits the audience into mutually exclusive groups to prevent cross-contamination between treatments.
- Review results: On average, experiments run for four weeks. The Amazon Ads experimentation team monitors the mid-test progress and provides an end-of-test analysis after the attribution window closes. Advertisers can incorporate experimentation learnings into future campaigns to improve performance.
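The power analysis described in the sample-size step above can be sketched with the standard normal approximation for two proportions. The baseline rate and minimum detectable effect below are hypothetical placeholders; the z-values correspond to a two-sided 95% confidence level and 80% power:

```python
import math

def sample_size_two_proportions(p_baseline, mde_relative,
                                z_alpha=1.96,  # two-sided 95% confidence
                                z_beta=0.84):  # 80% statistical power
    """Minimum audience size per arm needed to detect a relative lift
    (minimum detectable effect, MDE) over a baseline conversion rate,
    using the normal approximation for two proportions."""
    p1 = p_baseline
    p2 = p_baseline * (1 + mde_relative)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n_per_arm = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n_per_arm)

# Illustrative numbers only: 2% baseline purchase rate, 10% relative MDE
print(sample_size_two_proportions(0.02, 0.10))
```

As the formula shows, halving the minimum detectable effect roughly quadruples the required sample size, which is why tests are sized for the primary KPIs rather than every secondary metric.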
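The mutually exclusive audience split in the setup step can be illustrated with deterministic hash-based assignment, a common pattern for keeping each member in exactly one arm for the life of a test. This is a generic sketch, not Amazon's implementation; the salt string and 50/50 bucket split are assumptions:

```python
import hashlib

def assign_group(member_id: str, experiment_salt: str = "bfcm-leadup-2021") -> str:
    """Deterministically assign an audience member to one arm.
    Hashing (salt, id) keeps the Test and Control groups mutually
    exclusive and stable across the whole experiment window."""
    digest = hashlib.sha256(f"{experiment_salt}:{member_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # bucket in [0, 99]
    return "test" if bucket < 50 else "control"

# The same ID always lands in the same arm, so no one sees both strategies
assert assign_group("member-123") == assign_group("member-123")
```

Salting by experiment means the same audience can be re-randomized independently for a future test simply by changing the salt.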
Source: Amazon internal data 2021