Product managers’ beliefs about a feature and its benefits don’t always align with customers’ actual experience. But how do you test your assumptions against that reality? One of our customers, an accounting software company, used software usage analytics to track its major product features and how customers were actually using them.
Event tracking uncovered an interesting trend: the “killer” rolling budgets feature — developed at significant cost to the company — was not being used by customers until a month or more after product purchase.
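A finding like this falls out of a simple time-to-first-use calculation over the event stream. The sketch below is purely illustrative — the event names, schema, and data are hypothetical, not drawn from any particular analytics SDK:

```python
from datetime import datetime

# Illustrative event log: (user_id, event_name, ISO date) tuples.
# "purchase" and "rolling_budgets_opened" are hypothetical event names.
events = [
    ("u1", "purchase", "2024-01-02"),
    ("u1", "rolling_budgets_opened", "2024-02-10"),
    ("u2", "purchase", "2024-01-05"),
    ("u2", "rolling_budgets_opened", "2024-01-20"),
]

def days_to_first_use(events, feature_event="rolling_budgets_opened"):
    """Days between each user's purchase and first use of the feature."""
    purchases, first_use = {}, {}
    for user, name, ts in events:
        t = datetime.fromisoformat(ts)
        if name == "purchase":
            purchases[user] = t
        elif name == feature_event and user not in first_use:
            first_use[user] = t
    return {u: (first_use[u] - purchases[u]).days
            for u in purchases if u in first_use}

print(days_to_first_use(events))  # {'u1': 39, 'u2': 15}
```

Aggregating these per-user gaps (median, distribution) is what surfaces a pattern like “customers don’t touch this feature for a month or more.”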
Why aren’t trial users taking advantage of our “killer” feature?
If customers aren’t using this feature until a month or more after purchase, prospects are unlikely to use it during the critical trial period. To figure out the best way to promote use of the feature during evaluations, the engineering team ran an A/B test, deploying two separate builds of the software — each providing a different visible method of accessing and using the rolling budgets capability.
Product management tracked and studied which version led more people to use the feature and adopted that method within the UI of the next product release. As a result, adoption of the rolling budgets feature increased dramatically among trial users, leading to a higher conversion rate.
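Deciding which version “led more people to use the feature” comes down to comparing adoption rates between the two builds and checking that the gap isn’t noise. A minimal sketch using a standard two-proportion z-test — all counts here are made-up for illustration:

```python
from math import sqrt

# Hypothetical trial cohorts: users who received each build and how
# many used the rolling budgets feature during their trial.
used_a, total_a = 120, 1000   # variant A: feature reached via a menu
used_b, total_b = 180, 1000   # variant B: feature on the main toolbar

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for the difference between two adoption rates."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled adoption rate
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # standard error of the difference
    return (p2 - p1) / se

z = two_proportion_z(used_a, total_a, used_b, total_b)
print(f"adoption A={used_a/total_a:.1%}, B={used_b/total_b:.1%}, z={z:.2f}")
```

With these illustrative numbers, |z| well above 1.96 means the difference is significant at the 95% level, so shipping the winning variant in the next release is a data-backed call rather than a guess.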
Interested in learning more about software analytics in action and real-world results product managers have achieved? Download our latest ebook, Take the Guesswork out of Product Management: Building Better Applications with Software Analytics.