Why You Shouldn’t Stop A/B Tests Early - Issue 109
Why significance matters, and how long you should run an A/B test
Hello analysts! Today is Wednesday, and I am back as usual with another edition of my Data Analysis Journal - a newsletter about data analysis, data science, and product analytics.
I ended August with a bang by publishing my first guest post, How to measure cohort retention, at Lenny's Newsletter, which is the #1 business newsletter on Substack with over 200,000 subscribers 🤯⭐. Make sure to check it out and subscribe if you haven’t yet! It is the best newsletter about building products and driving growth.
My publication was also featured in The Analytics Engineering Roundup run by the dbt Labs team! Join their growing community of over 15K data practitioners to learn more about analytics engineering.
Today I wanted to cover the most commonly asked questions on experimentation in analytics:
How long should I run an A/B test? The common recommendation is two weeks, but why? (A quick sample-size sketch follows this list.)
Can I (or should I) stop an A/B test early?
If I have to stop early, what is the safest way to handle a fast A/B test?
Why are slow rollouts dangerous?
What is the recommended procedure for gradually rolling out A/B tests?
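Before we get into it, here is a rough sketch of the math behind the first question. The approach (a two-proportion power calculation via statsmodels) and every number in it, the baseline rate, the detectable lift, and the daily traffic, are illustrative assumptions of mine, not figures from this issue: you estimate how many users each variant needs to detect your minimum effect at a given significance level and power, then divide by your traffic to get a duration.

```python
# Illustrative sketch: how long a test needs to run, given assumed inputs.
# All numbers below are placeholders, not figures from the article.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.05        # assumed control conversion rate
treatment = 0.055      # assumed minimum detectable rate (a 10% relative lift)
daily_visitors = 2000  # assumed eligible traffic, split evenly across two variants

# Cohen's h effect size for two proportions, then solve for the per-variant
# sample size at alpha = 0.05 and 80% power (two-sided test).
effect_size = proportion_effectsize(baseline, treatment)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, alternative="two-sided"
)

# Convert total required sample (both variants) into days of traffic.
days = 2 * n_per_variant / daily_visitors
print(f"~{n_per_variant:,.0f} users per variant, roughly {days:.0f} days of traffic")
```

With these placeholder numbers the answer lands around two weeks, which is one reason that duration shows up as the default recommendation, but the honest answer always depends on your own baseline, detectable effect, and traffic.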