2 Comments

Hi Nadya, thank you for reading my newsletter.

What teams usually do: they launch a feature to a small % of traffic, say 25% of total traffic, compare the metrics lift against the other 75% of users who don't have the new feature, and call it an "A/B test". They think they are comparing Control (no feature) vs. Variant (new feature). But in fact, this is not an A/B test but simply a split test. There is a big difference.

You can't A/B test a brand-new feature, because you obviously do not have any data yet to prepare for the A/B test launch: you can't estimate the MDE, the test timeline, the expected lift, or the baselines. For an A/B test, even a Bayesian one, you need to set up your statistics in advance. You do not always need this for a split test.
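
To make that concrete, here is a minimal sketch in Python (statsmodels) of the pre-launch sample-size calculation an A/B test relies on; the baseline rate and MDE below are made-up placeholders, which is exactly the point: for a brand-new feature you have no real numbers to plug in.

```python
# Sketch: the pre-launch math an A/B test needs. The baseline conversion
# rate and MDE would normally come from historical data, which a brand-new
# feature does not have yet.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.10   # placeholder: historical conversion rate
mde = 0.01             # placeholder: minimum detectable effect (absolute)

effect_size = proportion_effectsize(baseline_rate + mde, baseline_rate)
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, ratio=1.0
)
print(f"~{n_per_group:,.0f} users per group needed")  # this drives the test timeline
```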

My recommendation is to launch the new feature to a small % of traffic and monitor adoption metrics and sensitive usage metrics (conversions, clicks, views) for a few days. Once you are confident the new feature is not harmful (e.g., it doesn't take traffic away from other features and it positively affects activity), you can expand the rollout to all users.
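
If it helps, here is a rough sketch in Python (pandas) of that kind of daily monitoring; the events table and its columns (date, exposed, converted) are hypothetical and only for illustration.

```python
# Sketch: daily monitoring of a small-% rollout, assuming a hypothetical
# events dataframe with columns: date, exposed (bool), converted (bool).
import pandas as pd

def daily_guardrail(events: pd.DataFrame, max_drop: float = 0.05) -> pd.DataFrame:
    """Compare conversion of exposed vs. unexposed users per day and flag
    days where the exposed group lags by more than `max_drop` (relative)."""
    daily = (events.groupby(["date", "exposed"])["converted"]
                   .mean()
                   .unstack("exposed")
                   .rename(columns={True: "with_feature", False: "without_feature"}))
    daily["relative_diff"] = daily["with_feature"] / daily["without_feature"] - 1
    daily["flag"] = daily["relative_diff"] < -max_drop
    return daily

# events = pd.read_parquet("rollout_events.parquet")  # hypothetical data source
# print(daily_guardrail(events))
```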

Once you have a month of data or more, run a Pre/Post analysis against your ecosystem metrics or top KPIs.
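
A minimal pre/post sketch in Python (pandas + scipy), with a placeholder KPI series and rollout date; note that a naive mean comparison like this ignores trend and seasonality, so treat it as a starting point.

```python
# Sketch: simple pre/post comparison of a daily KPI around the full-rollout date.
import pandas as pd
from scipy import stats

# Hypothetical daily KPI series indexed by date; replace with your own data, e.g.
# kpi = pd.read_csv("daily_kpi.csv", index_col="date", parse_dates=True)["revenue"]
dates = pd.date_range("2024-05-01", periods=60, freq="D")
kpi = pd.Series(range(60), index=dates, dtype=float)   # placeholder values

rollout_date = pd.Timestamp("2024-06-01")              # placeholder rollout date
pre = kpi[kpi.index < rollout_date].tail(30)           # 30 days before rollout
post = kpi[kpi.index >= rollout_date].head(30)         # 30 days after rollout

lift = post.mean() / pre.mean() - 1
t_stat, p_value = stats.ttest_ind(post, pre, equal_var=False)  # Welch's t-test
print(f"Pre/post lift: {lift:.1%}, p-value: {p_value:.3f}")
```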


Hi Olga - thank you for writing this. When you are launching a new feature (the 3rd category from above), you talked about monitoring KPIs (like LTV) and adoption metrics (like the % of users using the new feature), and how these adoption metrics will drive the long-term KPIs.

Do we still conduct an A/B test with one of the adoption metrics as the primary metric and compare test vs. control results, say after 1 month of monitoring? Or are you saying it does not make sense to do an A/B test in this scenario at all, and instead we should look at these KPIs and adoption metrics over a period of time with the new feature? In that case, what do we compare them against?
