Revenera sponsors a series of Product Management Today webinars featuring innovative ideas from top software product management thought leaders. In these blog posts, we ask the presenters to share their insights – we encourage you to watch the full on-demand webinars for even more details.

In his webinar, leading product development consultant Sam McAfee, author of Startup Patterns, identifies problems product development organizations often face in measuring their efforts, and offers practical advice for tackling them. You’ll find specific guidance for measuring what really matters, and for moving toward metrics that actually help you succeed.

What organizational problems stand in the way of getting measurement right?

First, we’re drowning in data. There are an enormous number of factors you can measure, and it’s difficult for teams and leaders to choose what to measure.

A related problem is the “streetlight effect.” It’s the old joke: it’s easiest to look under the streetlight for the wallet you dropped, but if that’s not where you dropped it, that won’t be very effective. The product development equivalent is looking for data where it’s easiest to find because you’re already recording it, rather than thinking carefully about what you should be measuring.

Next, product teams and managers are smart, high-performing folks, but they can be cavalier about how they generate hypotheses. A lack of rigor in how we construct experiments creates many measurement problems. Related to that: it’s easy to convince ourselves we already have the answers and we’re just seeking justification for them. We aren’t as honest with ourselves as we could be.

Finally, a lot of teams – especially in larger organizations – are misaligned on priorities. We may be measuring different things on different teams in different silos, rather than coming together and aligning our priorities.

Can you explore the problem of data overload?

Product managers have a dizzying array of analytics tools. I work with both large organizations and startups, and because data and metrics are such popular topics, by the time startups have their first version of a product in market they probably already have multiple analytics tools wired in. Google Analytics has extensive functionality out of the box. They may be using Mixpanel, and infrastructure data tools like Splunk. Our marketing tools – MailChimp, CRMs – are throwing off dashboards. There’s data washing over us all day long.

That creates a secondary problem which is common with lots of process-related tools. It’s easy to let the tool’s design drive what we care about. I saw this a lot in the agile community: we use a tool to track our user stories, tasks, and so forth, but its shape and structure winds up driving our process decisions when we ought to come up with our processes and then configure our tools to work that way.

There’s a lot of inertia when you install a tool. So it’s important to watch out for signs that we’re just following the defaults, rather than getting the tool to serve our real needs.

Also: it’s easy to look at all these shiny dashboards and widgets. There are so many queries we can run, we can spend hours going down every little rabbit hole. It’s tempting. But that’s not a focused and targeted way to use your time.

You mentioned the old joke about the “streetlight effect.”

As I said, we tend to look under the streetlight where it’s easiest to see, rather than over in the dark where we actually dropped our wallet. There’s a cognitive bias where we want to look at what’s easiest to measure, or what we already measure, instead of asking: What are we actually trying to learn? What do we need to measure to learn it? What measurement systems and tooling do we need to measure that?

This relates to the next problem: lack of rigor.

In technology and product development, we talk a lot about taking experimental approaches – agile, lean, design thinking. We say we’re using the scientific method. But the scientific method is intended to ruthlessly try to disprove new theories, following and embracing the null hypothesis. We have an idea that customers are behaving a specific way at this step of the process. We put the hypothesis out there. To be pretty sure we’re right, we need to try to prove we’re wrong as exhaustively as possible, until there’s no other conclusion but that we’re right before we move on.

It’s easy to say “we’re all scientists because we come up with hypotheses and measure things.” But if we’re not really living and breathing the principles of the scientific method, things get sloppy.

It’s really important to state your hypothesis clearly as your first step in building an experiment. Otherwise, it’s easy to convince ourselves we already knew something the data shows. At the neurological level, the key to learning is surprise. You form new neural pathways through surprise: through seeing something novel, the brain has to adjust to this new information.

That’s part of being honest with ourselves. We live in a business culture that developed out of the industrial period at the beginning of the 20th century, and a lot of our corporate cultures were formed around Frederick Winslow Taylor and his top-down ways of making decisions and organizing things. However, the methods that really work for powerful and effective product development tend to be less top down and more collaborative.

In most business cultures there’s still enormous tacit pressure to be right, especially as you go up the chain of command. We’re afraid to hypothesize: we have to know things, and they have to be true. That isn’t how the world works, it certainly isn’t how good product development is done, and it takes enormous courage to ask more questions and be honest about what we don’t know. It takes practice to model this.

Good product leadership comes from a place of genuinely being curious about the world and in some way wanting to improve it. We’re all in this to build products and services that make people’s lives better, so the first step is to tap into that sense of purpose: why are we doing this in the first place? I find, to my delight, that product development folks aren’t just trying to make money and achieve material success but actually care about the problems they’re trying to solve.

Grounding yourself in that sense of purpose opens you to being more curious about the world, and sharing that growth mindset with others. Then you can articulate what you want to learn more than what you’re already right about. If there’s anything leaders need to model, it’s asking more questions and doing more listening and a little less talking.

You said teams tend to be unaligned on metrics. Why?

All these issues roll up into the problem that nine-tenths of the organizations I run into face: they don’t have a coherent strategy. There’s a lot of talk about strategy – “we know where we’re going and how to get there” – but if you ask people on the ground floor what the strategy is and whether it’s been clearly communicated by leaders, there’s almost never much clarity. If a new employee can’t describe the strategy from memory in a couple of sentences, it’s too complicated and it probably won’t work.

In particular, that makes it hard to come up with the right measurements. If multiple teams are supposed to be collaborating to build systems that work together, they can easily be rowing in different directions. This lack of alignment is the most insidious and difficult-to-change problem with measurement.

So what do we do about all these problems?

Let’s start with the last one, because it’s so important to align our teams on a clear strategy. Only then can we make appropriate tradeoffs, because we have to respect the limits of our capacity as an organization: we only have so many people, resources, and time, and we can’t drive people into the ground and expect good results.

What’s hard about strategy is making the decisions required to come up with a good one. This consists of three parts. At the top, you need a clear vision of where you want to go: what is the objective for this initiative? You have a product, a certain market segment you’re trying to capture, a value proposition you’re trying to deliver. Your strategy would be: What capabilities do we need to reach that goal?

It’s important to keep the number of capabilities as small as possible, because the more interoperable capabilities you need, the greater your chance of failure. So you have to choose what not to do, what market segments to ignore. Your strategy should be: capture this market segment first, then once that’s in place we can expand to others.

Next, you can draw your high-level metrics from the capabilities you need to build, and then individual teams can draw more tactical metrics from those high level metrics, down at the component level.

All this requires difficult choices. We have to push ourselves to be as minimalist as we can in our strategy and try to keep it clear and simple rather than trying to be all things to all people. Life is a series of tradeoffs. There’s a great book by Greg McKeown, Essentialism, about reducing clutter in life and business, and one thing he says that’s great is: if it’s not a clear yes, it’s a clear no. We have a hard time saying no to things that are asked of us by our bosses, peers, and colleagues, but it’s really important to realize that success comes from narrowing down to the few things that really matter and leaving everything else out.

That’s scary. It takes a lot of bravery and conviction, but it helps if you do it together as a team and have the team’s trust. Then, everyone is committing to the same explicit tradeoffs, and we can hold ourselves accountable once we’ve decided. I’m working with a company that has a SaaS product that’s really well suited for small business. Leadership thinks it would work well for enterprise, which would require going upmarket and making it a much bigger, more robust, complicated tool. I’m cautioning against shifting too quickly upmarket, because they haven’t yet ironed out all the wrinkles in their current market. It’s dangerous to try to capture an adjacent market segment if you don’t quite have your product-market fit right in the segment you started with. You muddy your message, customers are confused about your value proposition, and your product doesn’t quite meet anyone’s needs because it’s trying to meet everyone’s.

How do you tell your boss: no, we can’t add that feature now?

The best way I’ve seen is to ask questions as a way of opening up all the data that’s available, and try to collaborate with your boss and say, here’s our capacity, let’s figure out together how we can succeed.

Don’t try to make your case only with data. People don’t make decisions based on rational data: they decide what they want and try to find data to back it up. We like to believe we’re rational, but it’s been proven otherwise. It’s more powerful to build empathy with your boss by asking questions about their priorities and what they’re trying to accomplish vis-à-vis their peers and the organization. Approach them with curiosity and openness, as a supporter and ally.

I’m not saying you should never say anything negative, or that you should be weak and squishy. You should have a firm opinion: bosses tend to respond better to respectful, clear, direct feedback than to people who always say yes.

People think being a boss is about giving orders and having people follow them. You get further if you think of bosses as people with particular needs, goals, frustrations, and pains. And they’re usually pretty open about those if you approach them with genuine empathy and curiosity. Building those relationships is important to getting alignment.

How do you quantify your true capacity?

We’ve known all the way back to Deming in the 1950s that an organization is a system with a fixed capacity. There’s only so much it can do, and trying to force more raw material through it only gums up the works, locks up teams that depend on each other, and causes breakdowns. So we need to figure out what our actual capacity is.

I put this into two categories. First, product or service metrics that focus on the customer behaviors we’re trying to influence. Second, delivery metrics: how well are we collaborating to build systems, products, and infrastructure to facilitate that flow of value to customers?

Let’s consider those buckets in more detail, using this diagram inspired by David Bland about product and market fit:

[Diagram: the AARRR funnel and product/market fit, inspired by David Bland]

You might recognize the acronym AARRR as Dave McClure’s “pirate metrics.” These are the categories by which customers traverse the funnel: Acquisition through Activation, to Retention, to generating Revenue and Referral. This is useful in choosing metrics.
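
To make that concrete, here’s a minimal sketch of computing stage-to-stage conversion through the AARRR funnel. The stage names follow McClure’s pirate metrics; the counts are hypothetical placeholders, not figures from the webinar.

```python
# Hypothetical counts of customers reaching each pirate-metrics stage.
funnel = [
    ("Acquisition", 10_000),  # visitors arriving from a channel
    ("Activation",   2_500),  # completed a key first action
    ("Retention",      900),  # came back within 30 days
    ("Revenue",        300),  # converted to paying customers
    ("Referral",        60),  # referred at least one new user
]

# Conversion rate from each stage to the next.
for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count if count else 0.0
    print(f"{stage} -> {next_stage}: {rate:.1%} ({next_count}/{count})")
```

Looking at where the funnel leaks the most is one simple way to decide which stage your metrics and experiments should focus on next.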

When a customer arrives from a channel to your product in the Acquisition phase, they have a pain point. A product is successful if it solves an existing customer pain point for an existing customer segment in a sizable, reachable market. All of that needs to be true. Acquisition is a big piece of that – and I’ve also noted “cost of customer acquisition” on the diagram, because it’s paid marketing that brings the customer in.

There’s an array of metrics for measuring acquisition properly, depending on where you are in the product development lifecycle. As you begin to build a new product or service, you’re typically still trying to figure out if you have a customer problem worth solving. You’re mostly focused on the question: can you acquire anyone at all?

Once you’ve figured that out, and you’re putting your solution in front of people, you’re in the stickiness phase: does it actually solve the customer’s problem? Now you’re looking more at retention.

Only after you’ve nailed these phases is it worthwhile to pursue growth by encouraging referrals. Many people make the mistake of trying to go super-viral without first testing whether their solution actually solves the problem. With enough budget, it’s easy to acquire a ton of customers. It’s also very easy to lose them through a leaky funnel if you haven’t nailed retention.

And notice my little note in the corner, PMF = CAC < LTV. Product Market Fit requires that Cost of Customer Acquisition be less than the customer’s Lifetime Value. But, as every startup knows, with new products or services, we have no lifetime value: it’s all cost of acquisition. It takes a while for that ratio to shift. That’s one way to sense whether your product is becoming successful: your lifetime value grows, your cost of acquisition drops, and you move toward a healthier ratio. If that doesn’t happen before you run out of money, you’re dead in the water.
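
As a rough illustration of that ratio, here’s a small sketch comparing LTV to CAC under a simple model. The formulas (margin-adjusted revenue over expected customer lifetime, blended acquisition cost) and every number in it are illustrative assumptions, not figures from the webinar.

```python
def ltv(avg_monthly_revenue: float, gross_margin: float, monthly_churn: float) -> float:
    """Simple LTV model: margin-adjusted monthly revenue times expected lifetime in months."""
    expected_lifetime_months = 1 / monthly_churn
    return avg_monthly_revenue * gross_margin * expected_lifetime_months

def cac(marketing_spend: float, customers_acquired: int) -> float:
    """Blended CAC: total acquisition spend divided by customers acquired."""
    return marketing_spend / customers_acquired

# Hypothetical figures for a young SaaS product.
lifetime_value = ltv(avg_monthly_revenue=50, gross_margin=0.8, monthly_churn=0.05)  # $800
acquisition_cost = cac(marketing_spend=60_000, customers_acquired=120)              # $500

ratio = lifetime_value / acquisition_cost
print(f"LTV ${lifetime_value:.0f} / CAC ${acquisition_cost:.0f} = {ratio:.1f}x")
# The 3x threshold here is a common rule of thumb, not a figure from the webinar.
print("Healthy trajectory" if ratio > 3 else "CAC still too close to LTV")
```

Tracking how that ratio trends over time, rather than its value on any single day, is what tells you whether you’re moving toward product/market fit.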

As for delivery metrics, there’s a dizzying number of ways to measure them. I think the single most important metric is an old one: cycle time. This chart is actually pretty straightforward.

[Control chart: cycle time per completed work item, plotted over time]

Cycle time has been around a long time. It comes from manufacturing, but it’s useful for many workflows.

If your team has a list of tasks it must accomplish to get a product built or add capabilities to it, most people organize that into some kind of to-do list, or backlog, or user stories, or features, or whatever you call your units of work. If you capture the time span from when you pick a piece of work off the stack to when the team finally ships it, that total elapsed time is the cycle time for that unit. As you capture these units with timestamps, you develop an average cycle time.

This control chart shows a timeline along the bottom, in months. The vertical axis is elapsed time for each dot. Whenever you complete a task, a dot gets plotted vertically based on how long it took. As the data accumulates, an average cycle time emerges. And the inverse – mathematically equivalent and also useful – is throughput: the number of items your team can complete in a given time period.
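
Here’s a minimal sketch of how cycle time and throughput could be computed from work-item timestamps, in the spirit of the control chart described above. The item records are hypothetical; in practice the data would come from your tracker’s export or API (Jira, for example) rather than a hard-coded list.

```python
from collections import Counter
from datetime import date
from statistics import mean

# Hypothetical completed work items: (item id, date work started, date it shipped).
completed_items = [
    ("PROD-101", date(2024, 3, 1),  date(2024, 3, 6)),
    ("PROD-102", date(2024, 3, 4),  date(2024, 3, 18)),
    ("PROD-105", date(2024, 3, 10), date(2024, 3, 13)),
    ("PROD-108", date(2024, 4, 2),  date(2024, 4, 11)),
]

# Cycle time: elapsed days from picking an item up to shipping it.
cycle_times = [(shipped - started).days for _, started, shipped in completed_items]
print(f"Average cycle time: {mean(cycle_times):.1f} days")

# Throughput: completed items per period (here, per calendar month).
throughput = Counter(shipped.strftime("%Y-%m") for _, _, shipped in completed_items)
for month, count in sorted(throughput.items()):
    print(f"{month}: {count} items shipped")
```

Notice that nothing here requires estimates: both numbers come entirely from historical timestamps, which is what makes them useful in the capacity conversation described below.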

Knowing cycle time helps you have that conversation with your boss. “Hey, here’s how much we’re generally getting done. The historical data speaks for itself: this is our capacity. Just forcing more stuff down the pipe won’t work.”

This control chart comes right out of Jira, and most teams don’t even know it’s there. When I talk to teams, I show it to them, and say: here’s how much work you can actually get done, so let’s be realistic about your capacity.

You also need to know general health metrics around your system. Maybe that’s web uptimes, or various quality and availability metrics. Is the product reliable? Are you responding to customer requests in a timely manner? How quickly do you work on things that customers request which actually fit into your strategy – in other words, how long does it take to deliver things customers want and will clearly pay for?

You might wonder how this compares to a burndown chart. When we talk about burndown or burnup charts, we’re talking about velocity in an agile world. Teams take a stack of work and estimate how long it’ll take to finish each task. Each task gets an estimate, and typically teams use the abstract value of storypoints to get a relative sense of each task’s difficulty. As teams pull things off the stack and work on them, the definition of velocity is how many storypoints can get done in a sprint or iteration.

Cycle time is a more direct measurement, and what’s really powerful is: it makes estimation of storypoints irrelevant. Instead of relying on engineers or other product developers to gradually calibrate towards accurate estimates of how long tasks will take, you use historical data. The control chart doesn’t lie, it’s not human generated, it doesn’t need calibration, it’s just calculating an average across completed units that’s pretty irrefutable. It cuts to the chase and removes much of the debate about what you can and can’t accomplish.

So what are your overall takeaways?

First, get really clear on the vision for your product and organization. Maybe it’s already there: that’s great, but make sure the whole team understands it and can talk about it in their own words. If it isn’t already there, start now. That’s senior leadership’s key job, and if it’s not getting done, they deserve clear and honest feedback. If they spent time at an executive retreat and came up with a strategy but nobody knows what it’s about, they’ll want to know that.

Next, figure out what you need to learn and how you’re going to measure it before you let your tools drive your measurement decisions and behaviors. It’s worth investing time upfront to figure out the right measurement for an experiment, rather than just using what’s conveniently at the top of the dashboard.

Embrace the null hypothesis. Use your colleagues and team to keep you accountable that everyone’s open and honest about what you know and don’t know. Challenge yourself to prove yourself wrong before you move on to the next step.

And finally – this is key – do one thing at a time together as a team. People try to do too many things at once and get nothing done. Keeping work-in-progress as minimal as possible will accelerate your throughput.