Playbook For Launching, Monitoring, and Analyzing A/B Tests - Issue 134
A framework for product experimentation: the procedure and the analysis.
Welcome to the Data Analysis Journal, a weekly newsletter about data science and analytics.
Today, I will share my step-by-step process for launching, monitoring, analyzing, and reporting A/B tests. I'll also cover the roles and expectations for test monitoring and support between product managers and data analysts, since this area of responsibility often overlaps and is a frequent source of tension.
Through working at different companies, I ended up developing a few different frameworks, each depending on the data team's structure and the analysts' reporting lines. Whether analysts are embedded in Product, Business, or Engineering, or are part of a squad or a tiger team, the framework will differ. That said, for today I'll keep things at a high level so they apply to any organizational and reporting structure.
First things first. I prefer to differentiate between experiments that introduce something new and experiments that optimize something that already exists, and I classify all product tests into three categories:
Optimizing an existing product or feature.
Introducing a change to an existing product or feature.
Introducing a new product or feature that didn’t exist before.
While these sound similar (and in many, many companies are treated the same), they actually have different lifecycles and rollouts and should be evaluated differently.
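To make the distinction concrete, here is a minimal Python sketch of how these three categories might be tagged on an experiment plan, with category-specific rollout defaults. The `TestCategory` and `ExperimentPlan` names and the particular rollout choices are illustrative assumptions, not prescriptions from the playbook itself.

```python
from dataclasses import dataclass
from enum import Enum


class TestCategory(Enum):
    """The three categories of product tests described above."""
    OPTIMIZE_EXISTING = "optimize_existing"  # e.g., tuning copy, layout, ranking
    CHANGE_EXISTING = "change_existing"      # e.g., redesigning a checkout flow
    INTRODUCE_NEW = "introduce_new"          # e.g., a feature that didn't exist before


@dataclass
class ExperimentPlan:
    name: str
    category: TestCategory

    def default_rollout(self) -> str:
        """Illustrative rollout defaults; each category gets a different lifecycle."""
        if self.category is TestCategory.INTRODUCE_NEW:
            # A brand-new feature might start with a staged ramp to limit risk.
            return "staged ramp: 5% -> 25% -> 50%"
        if self.category is TestCategory.CHANGE_EXISTING:
            # A change to an existing flow could run as a straight 50/50 split.
            return "50/50 split vs. current experience"
        # Optimizations might run as multi-variant tests against the control.
        return "multi-variant split vs. control"


plan = ExperimentPlan("new_onboarding_checklist", TestCategory.INTRODUCE_NEW)
print(plan.default_rollout())  # staged ramp: 5% -> 25% -> 50%
```

Tagging each test with its category up front makes it easy to attach the right rollout, monitoring cadence, and evaluation criteria rather than treating every experiment the same.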