How To Measure New Feature Adoption - Issue 187
Ways to measure the success of a new feature release or product adoption.
Welcome to the Data Analysis Journal, a weekly newsletter about data science and analytics.
One of the most common challenges in product analytics is measuring the success of a new feature release or analyzing product adoption - understanding how users discover it, how they use it, and how often. Simple questions can become quite complex when you start breaking them down by timeline, user segments, or engagement layers.
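To make that breakdown concrete, here is a minimal sketch of a daily adoption rate split by segment. The event log schema (`user_id`, `event_date`, `event_name`, `segment`) and the `used_new_feature` event name are assumptions for illustration, not a prescribed tracking plan.

```python
# A minimal sketch, assuming a hypothetical events DataFrame with one row per
# user event and columns: user_id, event_date, event_name, segment.
import pandas as pd

def daily_adoption_rate(events: pd.DataFrame, feature_event: str) -> pd.DataFrame:
    """Share of each day's active users (per segment) who used the feature."""
    # Daily active users per day and segment
    dau = (
        events.groupby(["event_date", "segment"])["user_id"]
        .nunique()
        .rename("active_users")
    )
    # Users who triggered the feature event, per day and segment
    feature_users = (
        events[events["event_name"] == feature_event]
        .groupby(["event_date", "segment"])["user_id"]
        .nunique()
        .rename("feature_users")
    )
    out = pd.concat([dau, feature_users], axis=1).fillna(0)
    out["adoption_rate"] = out["feature_users"] / out["active_users"]
    return out.reset_index()

# Usage (hypothetical): daily_adoption_rate(events, "used_new_feature")
```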
Measuring brand-new feature adoption is my least favorite topic because:
- There is nothing to compare new feature performance against - no baseline, no control group, and no easy way to tell whether the usage volume you are seeing is good or bad.
- New features are often announced via go-to-market (GTM) campaigns, which skew the feature's measured impact and complicate the analysis.
In this issue, I want to share what I have learned about new feature rollouts, show how to analyze their usage, and offer metrics for measuring product adoption.
Teams often confuse a feature rollout with feature testing. While the release steps and procedures may be similar, the statistics and analysis are different. Unfortunately, not many teams understand this distinction.
New feature release vs. feature optimization
I prefer to differentiate between experimentation that introduces a new feature and experimentation that optimizes an existing one, and I classify all product tests into three categories:
1. Optimizing an existing product or feature:
This is the most common type of A/B test. You change the color, format, or positioning of a known (existing) feature - a change that doesn't alter the user's path but is intended to optimize the experience. These are usually fast, low-impact tests: it's rare to see a significant conversion change, and most likely there will be low variance in your results. A sketch of the final significance check for this kind of test follows below.
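As a hedged illustration, this is what the significance check for such an optimization test might look like, using a two-proportion z-test. The conversion counts are made up, and `statsmodels` is just one of several libraries that could be used here.

```python
# A minimal sketch: compare conversion between the two arms of an
# existing-feature optimization test with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: converted users and exposed users per arm
conversions = [480, 510]        # [control, variant]
exposures = [10_000, 10_000]    # [control, variant]

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# Small UI tweaks usually produce small lifts, so results like this are
# often not statistically significant at conventional thresholds.
```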
2. Introducing a change to an existing product or feature:
In this case, you introduce a change to a known feature that affects the user journey or path. It may be a test that reduces or adds steps in the signup form, reroutes users through different page flows, gets users to your value pitch faster by adding or removing CTAs, or anything else that introduces a new user path; a funnel comparison like the sketch below is one natural way to evaluate it.
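For a journey-changing test such as a reworked signup flow, comparing step-to-step funnel conversion between the two arms is a common analysis. A minimal sketch follows; the step names and user counts are hypothetical.

```python
# A minimal sketch: compare step-to-step funnel conversion between the old
# (control) and new (variant) signup flows. All numbers are made up.
funnel_steps = ["landing", "form_start", "form_submit", "activated"]

# Unique users reaching each step, per variant
reached = {
    "control": [10_000, 6_200, 4_100, 2_050],
    "variant": [10_000, 6_900, 4_600, 2_300],
}

for arm, counts in reached.items():
    print(arm)
    # Conversion from each step to the next
    for step, prev, curr in zip(funnel_steps[1:], counts, counts[1:]):
        print(f"  {step}: {curr / prev:.1%} of previous step")
```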
3. Introducing a new feature that didn’t exist before:
This is the hardest type of experimentation and the one most often run and evaluated incorrectly. It should NOT be treated as a standard A/B test - as noted above, there is no baseline or control group to compare against, and launch campaigns skew early usage.