Revenera sponsors a series of Product Management Today webinars featuring innovative ideas from top software product management thought leaders. In these blog posts, we ask the presenters to share their insights – we encourage you to watch the full on-demand webinars for even more details.
In his webinar, experienced B2C and B2B product leader Tim Herbig, author of Lateral Leadership: A Practical Guide for Agile Product Management, shows how to be a data-informed product manager. You’ll learn where analytics should and shouldn’t fit in your toolset, how to avoid common mistakes PMs make with analytics, and how to use quantitative and qualitative data symbiotically to serve customers and meet your KPIs. For an even deeper dive, be sure to check out Tim’s Product Discovery masterclass.
Why should product managers care about analytics, and where should it fit in?
The greatest value of analytics is that you can use it to beat the HiPPO.
HiPPO stands for Highest Paid Person’s Opinion. That’s typically someone on the Board, or the CEO, whose views ordinarily beat ideas coming from the bottom up. Analytics can help you bring objectivity to make better decisions about what to build and whether what you’ve already built is working. If you only rely on gut feelings, the HiPPO will make the ultimate decision – and you want to avoid that.
Here’s my personal view of which tools play key roles in each phase of a product development cycle:
At the top, you need a very strong product vision and strategy that is agile enough to react to changes in user needs. Once you’ve established that, it’s about the symbiotic relationship between product discovery (building the right thing) and product delivery (building the thing right – in other words, execution).
At the lowest level, there are plenty of tools, but you typically do user interviews during product discovery, and something like user story or journey mapping during product delivery to slice your concept and translate it into tangible releases. While user interviews don’t play much of a role in delivery, and story mapping doesn’t play much of a role in discovery, analytics glues the whole thing together.
Analytics is super important in discovery to determine the current state of product usage, and help you articulate some first assumptions about what kind of problems users might have with your product. At the same time, analytics also help with the delivery process. You can use analytics to determine: OK, is this thing we rolled out really working for our users? Are we really hitting the KPIs we set out to achieve? So it plays a key role in both pillars that product managers need to care about every day.
You distinguish between being “data-informed” vs. “data-driven.” What’s the difference?
As this Venn diagram shows, being “data-informed” means you operate at the intersection of pure data, your own gut feeling – which is a pretty powerful and educated tool you have as a product person – and everything else, including design thinking, qualitative data, user research, and user experience tools and workshops.
Compare that with the right side of the diagram, where the whole circle is occupied by data: you simply focus on the numbers and don’t look left or right. You think the truth lies in the data; you see gut feelings as pretty much a necessary evil to overcome.
Being data-informed is like driving a car with a heads-up display. You’re still looking at the real world out there, and you still trust your gut for responding spontaneously if someone crosses the street in front of you. But you have enhanced information for more informed decisions. Whereas being data-driven is like putting on the VR goggles and trusting the computer to simulate the road for you. If you take off the goggles, you’ll have a much clearer view of what’s really going on.
Here’s one of my favorite examples. Years ago, Netflix debated whether they should send out a reminder email to people at the end of their trials to say your trial is about to end, your credit card will be charged next week. And they A/B tested it, and of course the pure A/B test numbers showed they shouldn’t roll out this email, because the group that received it was much likelier to cancel their trials. But Netflix also ran qualitative interviews to look at brand perception and loyalty, and ultimately decided to roll out the email. Short-term, that led to a drop in the bottom line. But in the long term, users who canceled trials after receiving the email were much likelier to come back later, and to jump directly into a paid plan. Netflix played the long game and invested in the right decision by combining multiple tools upfront to operate in a user-centered way at scale.
Aside from relying solely on quantitative analytics, what other pitfalls do product managers fall into with analytics?
First, don’t get caught up in microconversions: super-tiny metrics like a button click that don’t really contribute to key metrics like overall revenue or user growth.
Second, most analytics numbers tell you what people are doing, not why they’re doing it.
Third, don’t leave interpretation of your data to your stakeholders or teams themselves, because everyone will see what they want to see. It’s important that you give them the right context and suggest a reasonable interpretation of the data.
Fourth, don’t confuse today’s behavior with tomorrow’s potential product. Don’t let today’s metrics hold back your vision.
Now that you’ve given us the “don’ts,” what are the “do’s”? When should we use analytics?
First, think of analytics as a great tool for sanity checking your first assumptions about the user’s problems. If you have some very early ideas of what users might be struggling with, jumping into your analytics tool of choice can give you some preliminary answers, to create a foundation for your first hypotheses.
Analytics can help you bring objectivity to your understanding of the outcomes of a product or change – especially when you’re working with C-level decision-makers. It’s also ideal for monitoring your product’s ongoing performance and spotting any product behaviors that might be broken.
When people talk about analytics, a whole lot of terms get thrown around. Before we go further, let’s get a shared understanding of the key benefits of these tools and when to use them.
First, there’s web/app data analytics: tracking tools most commonly used by the product or analytics manager. These do front-end tracking. They tend to be pretty accessible, though they may provide some expert-level data for analytics managers. These tools simply mirror current behavior.
Second is simple A/B testing, typically used by product managers or conversion optimization managers. Depending on the company’s maturity, those might be separate roles, or a skill the product manager owns; or maybe they’re the responsibility of the UX department. A/B testing helps you decide which design or version performs better, but lots of craftsmanship is required to do it right.
Third are heatmaps. They’re often not considered analytics tools, but they are extremely helpful for answering specific questions. Very little expertise is needed to look at a heatmap and distill some information, such as where did people scroll, or click, or move the cursor? Heatmaps are incredibly valuable for spotting really obvious usability flaws. In one product, we hadn’t made images clickable, and we saw people were clicking on them all the time, so we tried to think of a beneficial action that should happen when someone clicked. You wouldn’t get that information from a typical analytics tool or even from a typical usability interview: you need the visual cues to spot it.
Next is your data warehouse, sometimes called business intelligence, for back-end tracking. This is the true backbone behind all your analytics tools. It lets you unify data from many sources to form one holistic user profile. It’s mostly used by data analytics managers, though if you’re a more technical product manager, you might run some queries there yourself. Data warehouses are great for making more granular user selections for A/B testing, and for double-checking data from front-end analytics tools, which tend to be a bit more error-prone.
Last is data visualization. This makes analytics data tangible, enabling you to combine input from various data sources and analytics tools into a super-flexible dashboard. Advanced data visualization tools have unmatched flexibility in building custom dashboards, so you could draw on the same datasets to create different dashboards for the C-level, the UX manager, analytics manager, and product owner.
More A/B testing is better, right?
Actually, no. On the surface, A/B testing is so tangible and the key message is so easy – we can find out which is better – that people do fancy A/B testing just for the sake of it. But you need to focus your A/B testing where it will help you make significant progress on your business metrics.
Here are two red flags that you might be doing A/B testing for its own sake, not for a business reason:
Lack of hypotheses. If there’s no clearly articulated, shared hypothesis about why a certain version should be rolled out, you’re probably not doing A/B testing right.
Used to resolve conflicts. Often, teams with conflicting goals do an A/B test instead of settling the conflict. This leads to tests with only marginal differences: very low contrasts in visuals or behavior. By focusing simply on reaching consensus, you miss opportunities to make greater progress on your business metrics. These conflicts are sometimes a sign that people aren’t incentivized to focus on the metrics that matter most.
When you think about moving a significant KPI, also think about your Key Failure Indicators (KFIs). What countermetrics should you be looking at? For example, you want to drive upsells, but your KFI might be an increase in product support costs or user complaints, or costly churn as a side effect of too-aggressive upselling. Look left and right beyond your test’s core KPI, not just straight ahead.
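One lightweight way to make this concrete is to evaluate every test readout against countermetric guardrails alongside the primary KPI. Here’s a minimal sketch in Python; the metric names, deltas, and thresholds are purely hypothetical.

```python
# Hypothetical A/B test readout: relative change vs. control for each metric.
results = {
    "upsell_rate": +0.12,       # primary KPI: upsells up 12%
    "support_tickets": +0.09,   # KFI: support contacts up 9%
    "churn_rate": +0.01,        # KFI: churn up 1%
}

# Guardrails: how much degradation on each KFI we are willing to accept.
guardrails = {
    "support_tickets": 0.05,
    "churn_rate": 0.02,
}

breaches = {metric: delta for metric, delta in results.items()
            if metric in guardrails and delta > guardrails[metric]}

if breaches:
    print(f"Hold the rollout – KFI guardrails breached: {breaches}")
else:
    print("Primary KPI win with no KFI breaches – safe to roll out.")
```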
Another A/B testing pitfall is to share your results too early. A/B tests always take time to become valid, and stakeholders are impatient. Don’t give in to nagging. If you show an interim result after three days, people say: my version is winning, we’re done, let’s roll this out and make the money.
Always make your statistics and significance calculations visible, so stakeholders don’t think you’re hiding anything. When you’re super-transparent, you can educate people about the time actually required to run a successful A/B test.
Don’t rely solely on calculators. While determining if an A/B test was successful requires some calculation, and significance calculators can help, they only look at data you feed them. They don’t reflect the big picture, including all environmental factors. For example, you might have competing A/B tests running concurrently, and they may heavily influence your own test’s results.
As a rule of thumb, every A/B test should run for at least two weeks, to fully capture weekly cycles and fluctuations. Aim for at least 500 conversions per variation — ideally 1,000. Look for a significance of 95% or more. If you can’t reach these levels, you probably shouldn’t run that A/B test. Run fewer tests, and run them longer.
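To make the significance check concrete, here’s a minimal sketch of a two-proportion z-test using only Python’s standard library; the visitor and conversion counts are made up for illustration, and a real test still needs the duration and competing-test caveats above.

```python
import math

def ab_test_significance(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion z-test: returns (z-score, two-sided p-value)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: roughly 1,000 conversions per variation, as suggested above.
z, p = ab_test_significance(20_000, 1_000, 20_000, 1_080)
print(f"z = {z:.2f}, p = {p:.3f}, significant at 95%: {p < 0.05}")
```

Even with roughly 1,000 conversions per variation, a modest lift like this one doesn’t clear the 95% bar – which is exactly why running fewer, longer tests pays off.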
Another pitfall is focusing on the wrong KPI. Most products have a certain funnel: acquisition, registration, activity, conversion, churn. Many companies can’t test at the bottom of the funnel where the true money is made, so they test microconversions up at the top or the middle. But you can’t connect the KPIs associated with these A/B tests to your overall bottom line.
If your test succeeds in increasing the top of the funnel by 20% – more people coming to your website – don’t assume that increase carries through to where the money’s made. Only in an ideal world would you be acquiring 20% more users with the exact same engagement and interaction and willingness to pay as the original group. A top-of-funnel increase typically leads to lower bottom-line conversion rates because the quality doesn’t carry through.
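As a back-of-the-envelope illustration (all numbers hypothetical): a 20% lift in visitors paired with slightly lower-intent traffic can leave the bottom of the funnel almost unchanged.

```python
# Hypothetical funnel: visitors -> registrations -> paying customers.
baseline_visitors = 10_000
registration_rate = 0.10       # 10% of visitors register
paid_conversion_rate = 0.05    # 5% of registrations convert to paid

baseline_paid = baseline_visitors * registration_rate * paid_conversion_rate  # 50 paying users

# A "winning" top-of-funnel test brings 20% more visitors, but the extra
# traffic is lower intent, so the downstream rates dip slightly.
lifted_visitors = baseline_visitors * 1.20
lifted_paid = lifted_visitors * 0.09 * 0.047

print(baseline_paid, lifted_paid)  # 50.0 vs ~50.8 – the 20% lift has almost vanished
```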
A potential solution is to always invest a bit more time in letting the test run and monitoring the cohorts. Watch their behavior over time and see if there’s really an impact on the big metric – for example, revenue.
One more pitfall: A/B test results don’t equal user feedback. They don’t tell you why people are doing certain things. Maybe you only lifted your numbers in a certain version because you locked people super tightly into the checkout process. It doesn’t mean they like your product more. You’ve just forced them to perform better, and the real story can only be uncovered through quality face-to-face user interviews that dig deeper.
Finally, A/B tests don’t help you test new business models or disruptive, innovative ideas. If you’re at the beginning of product discovery, you still have to use other experiments to validate your ideas.
Remember, qualitative (attitudinal) validation tends to help answer questions like what do people need? What are the problems they’re having? How are they solving those problems currently? Quantitative (behavioral) measures help answer questions like do people want the product? Which design works better?
Everyone knows there’s a discrepancy between what people say they’ll do and what they’ll actually do. People will say, I’d totally buy this product, I love it. You’re interviewing them and they like you and you gave them a cup of coffee. But at home when they have to enter their credit card data they might think twice about spending $10 a month on your subscription. Does that mean I have to give up Spotify or Netflix?
What do you do when you can’t build a feature before you test it?
My team was tasked with developing a completely new feature for a new user group. It would have taken six months to build it. To do that and show it to 50% of our users wouldn’t have been very lean. So we used our data warehouse to extract certain events we thought would resonate with our users, manually translated them into an email template, and sent out fake emails to our target groups. Then, we compared open and click rates with benchmark emails. We sent three fake emails, because if we’d sent only one, higher click rates could have been attributed to novelty. By sending three, we got a true average that helped us quantitatively determine which content types in our new product would be most interesting to users, and made a strong argument to proceed with development.
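Here’s a rough sketch of that kind of comparison, assuming you’ve already exported send/open/click counts from your email tool – every name and number below is hypothetical.

```python
# Hypothetical send/open/click counts for the three "fake" feature emails
# versus an existing benchmark newsletter.
fake_emails = [
    {"sent": 2000, "opened": 640, "clicked": 180},
    {"sent": 2000, "opened": 590, "clicked": 150},
    {"sent": 2000, "opened": 610, "clicked": 165},
]
benchmark = {"sent": 2000, "opened": 560, "clicked": 95}

def click_rate(email):
    return email["clicked"] / email["sent"]

# Averaging across three sends smooths out the novelty effect of the first email.
avg_fake_click_rate = sum(click_rate(e) for e in fake_emails) / len(fake_emails)
benchmark_click_rate = click_rate(benchmark)

print(f"fake-email average: {avg_fake_click_rate:.1%} vs benchmark: {benchmark_click_rate:.1%}")
```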
Are the challenges different with B2B products?
Most B2B enterprise products aren’t blessed with the data and traffic of a B2C product. So you get caught up searching for something that’s not there. You’re tempted to use all your fancy B2C metrics and say, wow, I got five more clicks this week, that totally must be due to the last feature we released.
For B2B, I’ve learned to focus on only a few key metrics, like weekly active users or subscription revenue, and to combine quantitative data with qualitative user feedback. You could use Net Promoter Score for qualitative user feedback even though it’s controversial; or pick another user feedback mechanism to help determine what resonates with your audience. In B2B, I’m convinced that ten well-executed, well-recruited user interviews beat obsessing over clicks of drop-down menus or other micrometrics.
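A metric like weekly active users is simple enough to compute straight from an event log; here’s a minimal sketch, assuming events arrive as (user_id, timestamp) pairs – the data below is made up.

```python
from datetime import datetime, timedelta

def weekly_active_users(events, week_start):
    """Count distinct users with at least one event in the 7 days from week_start.

    `events` is an iterable of (user_id, timestamp) tuples.
    """
    week_end = week_start + timedelta(days=7)
    return len({user for user, ts in events if week_start <= ts < week_end})

# Hypothetical event log.
events = [
    ("alice", datetime(2020, 3, 2, 9, 15)),
    ("bob",   datetime(2020, 3, 3, 14, 40)),
    ("alice", datetime(2020, 3, 5, 11, 5)),
    ("carol", datetime(2020, 3, 12, 8, 30)),   # falls outside this week
]
print(weekly_active_users(events, datetime(2020, 3, 2)))  # 2
```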
Can you talk a bit more about capturing the user feedback you need?
Make capturing user feedback a core part of your team’s processes. At Iridion, we make all user feedback available through one Slack channel, whether it’s sales conversations, requests for demos, demo follow-ups, technical requests, or anything else. I encourage development team members to check this channel a couple of times a week.
They don’t have to do customer support themselves, but I want them to be as close as possible to our users, to understand the language users speak and the issues they have. I’ve found developers become proactive about it – and we get way more informed debates and decisions. Everyone feels more connected to users: it’s way more tangible who they’re building stuff and solving problems for.
The person closest to the user should make the call, and I want to move my whole team as close to the users as I can, so they can make the call — instead of the HiPPO problem, where whoever has the highest salary makes all the decisions.