How To Find Optimal Proxy Metrics - Issue 197
Using proxy metrics for measuring and quantifying the impact of a product rollout or a new feature.
Welcome to the Data Analysis Journal, a weekly newsletter about data science and analytics.
Teams often mistakenly evaluate A/B tests against North Star metrics or business KPIs, such as user retention, customer churn, revenue, or LTV. This approach is flawed for two reasons:
First, business metrics are not sensitive, meaning any lift created by an A/B test is unlikely to be reflected in KPIs (unless your MDE is over 50%).
Second, business metrics are designed to resist short-term changes from A/B tests and to reflect only long-term impacts.
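To see why sensitivity matters in practice, here is a minimal sketch of a minimum detectable effect (MDE) calculation for a proportion metric like retention, using the standard two-sample formula at alpha=0.05 (two-sided) and 80% power. The baseline rate and sample sizes are illustrative assumptions, not numbers from the study:

```python
import math

def mde_abs(p, n, z_alpha=1.96, z_beta=0.84):
    """Minimum detectable absolute lift for a two-sample proportion test
    with n users per arm (alpha=0.05 two-sided, power=0.80)."""
    return (z_alpha + z_beta) * math.sqrt(2 * p * (1 - p) / n)

baseline = 0.30  # hypothetical 30-day retention rate
for n in (10_000, 100_000, 1_000_000):
    mde = mde_abs(baseline, n)
    print(f"n={n:>9,} per arm -> MDE ~ {mde:.4f} abs ({mde / baseline:.1%} rel)")
```

Even with 100,000 users per arm, a slow-moving KPI like retention can only detect lifts of roughly 2% relative, which is why a small but real product improvement often looks like a null result when measured against the North Star.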
Despite this, using business KPIs to measure the effectiveness of A/B tests remains a common practice. To address this issue, researchers from Google, Stanford, and the Department of Statistical Science at Duke University collaborated on a study to explain why you should use sensitive proxy metrics for A/B tests instead of the North Star or business KPIs.
They introduced the concept of Pareto Optimal Proxy Metrics, which significantly improve the accuracy and sensitivity of lift predictions. Let’s dive into why core business KPIs aren’t suitable for A/B tests and how to select the appropriate proxy metric for measuring product rollouts or feature optimization.
Product is a subset of the business.