7 Comments

Great read! Would you advise against replacing "all existing users" with active users? For example, % of users who used the feature in April / monthly active users in April. Thanks.

Yeah, it will make the adoption rate higher, because your denominator will be smaller and, thus, more precise.

It will also depend on the audience type and size. Sometimes teams roll out a new feature to a specific user cohort that can be a small fraction of active users. In that case, your denominator should be only those users who are eligible to interact with the new feature.
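
To make that concrete, here is a minimal sketch in Python. The table and column names (is_active_in_april, is_eligible, used_feature) are purely illustrative assumptions, not anything from the article; the point is only that the numerator stays the same while the denominator shrinks to the eligible cohort.

```python
# Minimal sketch: adoption rate with different denominators.
# All table and column names here are hypothetical, for illustration only.
import pandas as pd

users = pd.DataFrame({
    "user_id":            [1, 2, 3, 4, 5, 6],
    "is_active_in_april": [True, True, True, True, False, False],
    "is_eligible":        [True, True, True, False, False, False],  # e.g. rollout cohort
    "used_feature":       [True, True, False, False, False, False],
})

# Denominator 1: all monthly active users in April
mau = users[users["is_active_in_april"]]
adoption_vs_mau = mau["used_feature"].mean()            # 2 / 4 = 50%

# Denominator 2: only users eligible to see the new feature
eligible = users[users["is_eligible"]]
adoption_vs_eligible = eligible["used_feature"].mean()  # 2 / 3 ≈ 67%

print(f"Adoption vs. MAU:      {adoption_vs_mau:.0%}")
print(f"Adoption vs. eligible: {adoption_vs_eligible:.0%}")
```

The numerator (users who used the feature) is the same in both cases; only the denominator changes, which is why the eligible-cohort rate comes out higher.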

Makes sense. Thanks for the response.

Cool article, great material! Thank you!

This article also covers the product adoption process, its main stages, and methods to make them effective: https://gapsystudio.com/blog/product-adoption-process/

Hey Olga. This was a great read. I loved the Product adoption metrics part, which helped me in one of my recent interviews. Thank you so much for this blog and all the great reads and events you have been sharing. It has definitely helped me in refining my analytical skills. :)

I came across this case study question as I am learning more about how A/B tests help in analyzing the success and engagement of new product feature releases.

Say you're conducting an A/B test for a new product feature in a B2C company, and your test cycle is typically 5 weeks. Two weeks have passed so far, and your results are as follows:

Week 1: The conversion rate of your Control group is higher

Week 2: The conversion rate of your Test group is higher

Now the executive leadership team wants to know how the test is performing, since they want to decide whether or not to release the feature. The leadership team is not willing to wait for the test to run for all 5 weeks, since they would like to make this business decision a lot sooner.

The question is: how will you present your analysis to the leadership team, and how will you explain the change in conversion rate between your test and control groups?

From your experience, I would love to hear your thoughts on this, especially on how you would present your analysis to the executive team and what other data points you would take into account while answering follow-ups from your leadership team.

Looking forward to hearing from you. :)

Hi Sangamitra,

Thank you so much for reading the journal 😍!

First things first: as an analyst, you do not present any data from the A/B test until it has reached significance. Results can fluctuate a lot one way or another (that’s why you see different results every week), and you should not make any assumptions, reports, or decisions based on them just yet.

The ideal process that I encourage analysts to follow is:

1. Formulate a test hypothesis and define the desired audience.

2. Run the initial A/B test analysis estimating (1) the baseline conversion, (2) the sample size needed, and (3) the time frame for reaching significance. Based on your findings, you should inform the product team of the expected test timeline to avoid cases when the timeline is too long and the team can’t wait for the test to reach significance before the feature launch. (A rough sketch of this estimate is shown after the one-pager link below.)

3. Run an A/A test with a small % of traffic to validate the test setup and the data flow.

4. Run the A/B test, occasionally monitoring its setup.

5. Stop the test once you have reached significance. Do not keep it open until you reach the full sample size; otherwise, you might run into data pollution.

6. Evaluate the findings and present the test outcome.

A/B One-Pager: https://dataanalysis.substack.com/p/ab-one-pager
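
For Step 2, a minimal sketch of the up-front estimate might look like the following. The baseline conversion, MDE, and daily traffic figures are illustrative assumptions, not numbers from this thread, and the statsmodels-based power calculation is just one common way to do it.

```python
# Minimal sketch of Step 2: estimate sample size and test duration up front.
# Baseline conversion, MDE, and daily traffic below are illustrative assumptions.
import math

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_cr = 0.05                         # (1) baseline conversion of the control experience
mde_relative = 0.10                        # minimum detectable effect: +10% relative lift
target_cr = baseline_cr * (1 + mde_relative)

# (2) sample size per variant for 80% power at alpha = 0.05
effect_size = proportion_effectsize(baseline_cr, target_cr)
n_per_variant = NormalIndPower().solve_power(
    effect_size=abs(effect_size), alpha=0.05, power=0.80, ratio=1.0
)

# (3) expected time frame, given how many users enter the test per day
users_per_day_in_test = 4_000              # e.g. 25% of daily traffic split across two variants
days_needed = math.ceil(2 * n_per_variant / users_per_day_in_test)

print(f"Sample size per variant: {n_per_variant:,.0f}")
print(f"Estimated duration:      {days_needed} days")
```

Knowing the estimated duration up front is exactly what lets you warn the product team before the test starts, instead of negotiating mid-test as in the case above.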

Now, the case you described is very common, and it happened because Step 2 was missing. Here are your options:

- Communicate to the leadership team that the test is not significant, and you can’t make any decisions yet based on its inconclusive findings (could be painful but must be done).

- If you are running the test on 15% or 25% of traffic, you can expand it to 50% or 75% of traffic (if appropriate) to reach significance faster.

- If the above step is not an option, you can try to use another conversion for test evaluation or change the MDE (Minimum Detectable Effect, https://splitmetrics.com/resources/minimum-detectable-effect-mde/) to see if the data becomes significant and then report results (a quick significance check is sketched below).
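
As an illustration of that significance check, here is a minimal two-proportion z-test sketch; the conversion and exposure counts are made up, and in practice you would plug in your current Control and Variant numbers.

```python
# Minimal sketch: check whether the Control vs. Variant conversion gap is
# statistically significant before reporting anything to leadership.
# The counts below are made-up illustrations, not data from this thread.
from statsmodels.stats.proportion import proportions_ztest

conversions = [410, 395]      # [variant, control] converted users so far
exposures   = [8_000, 8_000]  # users assigned to each group so far

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)

alpha = 0.05
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
if p_value < alpha:
    print("Difference is statistically significant; safe to report directionally.")
else:
    print("Not significant yet; results may still flip week to week.")
```

If the p-value is still above your alpha, the honest answer to leadership is the first option above: the test is not yet conclusive.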

When you report results, it’s a good practice to evaluate them against top-level company metrics to show the test’s impact on (1) revenue and (2) MAU or retention (if appropriate). For the second layer of metrics, you can pick the frequency of new feature usage, discoverability, usage volume (adoption rate), etc.

Also, the Variant rarely shows a significant improvement. In 90% of cases, it performs very close to Control with slight or no difference. Often product teams chase this 0.05% improvement and spend a lot of time on the test setup, instrumentation, and evaluation. That’s why, for an analyst, it is very important to understand the test hypothesis, the whole product picture, and the test specifics to provide the right recommendation.

Hope this helps!

This is awesome. Thank you so much for your detailed step by step analysis of the A/B testing part.

It was interesting to learn how you would explain your current results and also tie them to the top-level company KPIs.

I also just checked out your A/B testing one-pager. It was a great head start into the different aspects that need to be considered in your initial hypothesis. Thank you for this.
