Data Analysis Journal

A/B Test Checklist - Issue 233

A short guide to product experimentation steps and process.

Olga Berezovsky
Nov 20, 2024

Hi everyone, I just realized it’s been 2 years since I shared my A/B Test Checklist. It’s one of the shortest articles in my newsletter but also one of the most popular. I designed it as a quick, handy guide and a refresher on terminology, steps, and key calculations.

This checklist is meant for product and marketing leaders who may not have a background in statistics but still work hands-on with experimentation.

I’ve updated the guide to make it even clearer and more actionable. I hope you find it helpful!

✍️ Steps for conducting a product experiment: 

1. Can you test it? 

Not everything can be A/B tested. New experiences or major product releases may not fit the typical A/B testing framework (read How To Measure New Feature Adoption). Consider potential sources of bias, such as the novelty effect or change aversion.

2. Should you test it?

Do you have enough users or events? Without a sample size of at least 50,000 users, A/B test results may not be statistically reliable.

Set your expected effect size – this is your Minimum Detectable Effect (MDE), the smallest difference between the Control and Variant that would be worth detecting. If the Variant turned out to be only 0.02% better than the Control, would you still want to run the test? Would it be worth the cost and time?
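To make this concrete, the baseline rate, the MDE, and the usual significance and power targets together determine how many users you need. Below is a minimal sketch of that calculation using statsmodels; the baseline rate, MDE, and power values are illustrative assumptions, not numbers from this article.

```python
# Sketch: how many users per group are needed to detect a given MDE?
# Baseline rate, MDE, alpha, and power below are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.10                # current Control conversion rate (assumed)
mde_relative = 0.05                 # smallest lift worth detecting: +5% relative
variant_rate = baseline_rate * (1 + mde_relative)

# Cohen's h effect size for comparing two proportions
effect_size = proportion_effectsize(variant_rate, baseline_rate)

# Solve for the sample size per group at 5% significance and 80% power
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    ratio=1.0,
    alternative="two-sided",
)
print(f"Users needed per group: {n_per_group:,.0f}")
```

The smaller the MDE, the larger the required sample, which is why a tiny expected lift can make a test not worth running at all.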

Is it a good time to run the test? If the tested dashboard is about to be sunsetted, or the user flow is going to be replaced or removed, there is no point in testing a feature that will soon be deprecated.

Also, consider factors such as seasonality, upcoming version releases, open bugs, etc.

3. Formulate a hypothesis.

A hypothesis like “The newly tested user navigation flow will improve user retention” is too vague. Instead, consider more measurable versions:

  • The new navigation flow will increase the number of app opens per user per day by at least 10%.

  • The new navigation flow will improve View-to-Paid CVR by at least 5%.

  • The new navigation flow will drive at least 10% more DAU to the dashboard.

A well-defined hypothesis should focus on one or two success metrics (see the sketch below for how one of these could be evaluated).
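For example, once the test has run, the second hypothesis above (a View-to-Paid CVR lift of at least 5%) could be evaluated with a standard two-proportion z-test. This is a minimal sketch using statsmodels; the conversion and viewer counts are hypothetical placeholders.

```python
# Sketch: evaluating a View-to-Paid CVR hypothesis with a two-proportion z-test.
# All counts below are hypothetical placeholders.
from statsmodels.stats.proportion import proportions_ztest

conversions = [1_250, 1_380]   # paid conversions: [Control, Variant]
viewers = [25_000, 25_000]     # users who viewed the page: [Control, Variant]

control_cvr = conversions[0] / viewers[0]
variant_cvr = conversions[1] / viewers[1]
relative_lift = (variant_cvr - control_cvr) / control_cvr

# One-sided test of whether the Variant CVR is higher than the Control CVR.
# With the [Control, Variant] ordering, alternative="smaller" tests Control < Variant.
stat, p_value = proportions_ztest(conversions, viewers, alternative="smaller")

print(f"Control CVR: {control_cvr:.2%}, Variant CVR: {variant_cvr:.2%}")
print(f"Relative lift: {relative_lift:+.1%}, p-value: {p_value:.4f}")
```

A statistically significant result still needs to clear the MDE: the observed lift would also have to be at least the 5% you committed to before launch.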

4. Finalize your set of metrics.

For A/B analysis, I use a set of 3 metrics:
