
Revulytics (now part of Flexera) sponsors a series of Product Management Today webinars featuring innovative ideas from top software product management thought leaders. In these blog posts, we ask the presenters to share their insights – we encourage you to watch the full on-demand webinars for even more details.

Software management consultant Johanna Rothman, a.k.a. “The Pragmatic Manager,” identifies five product traps that can reduce the effectiveness and value of your agile projects – and offers actionable solutions. For even more on Johanna’s approach to agile, be sure to check out her book, Create Your Successful Agile Project: Collaborate, Measure, Estimate, Deliver!

Many agile projects don’t work as well as we hope they will. Why?

I often see five major agile product traps:

  • Velocity for estimation
  • Software teams as feature factories
  • Teams that violate iteration boundaries
  • Delivery without discovery
  • Roadmaps that show certainty

Let’s start with velocity. That’s a measure of capacity, not productivity. Your team might be a full cross-functional team that can do 37 story points inside an iteration. That’s great, but it doesn’t mean anything. Story points aren’t equivalent to stories completed.

Velocity is also a function of the team you’re on: has it learned how to work together? And it’s a function of the domain.

Story points hide complexity. So there’s the essential complication, from Fred Brooks’ Mythical Man-Month: you might be trying to solve really difficult problems. When I was a developer working with those old cameras with 8- or 16-bit grayscale, that problem had essential complication.

We also have accidental complication: technical debt, stories that are too large, too much code, insufficient test, all that stuff. Story points try to estimate the essential and accidental complication, but they don’t actually help do either.

To use story points, you must have technical excellence. You must finish all the features, because that’s how you know the code is clean enough to use. And if you’re like many of my clients, you don’t have enough automated tests to know whether you have technical excellence.

Instead of velocity, consider using cycle time for estimation:

[Chart: How to use cycle time for estimation]

In this chart, the team was using 10-day iterations. They started Story 1 on Day 1 and finished on Day 3, for a duration of two days. They finished five stories inside 10 days, with an average cycle time of 2.4 days.

That level of precision isn’t necessary; I’d round up to 2.5 days. Am I allowing the work to expand to fill the time available, as in Parkinson’s Law? Probably not. Almost every team has extra work that isn’t included in the stories, so it’s important to understand: beyond time for the stories, do we need to add support, or increase test automation?

Those might be separate stories included here, and what’s nice about using cycle time is once you start measuring it, you can say, oh, this is normal. We don’t have to estimate story points, we can just say, what are the next 5 stories?
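
Here’s a minimal sketch of that cycle-time arithmetic in Python. Story 1’s start and finish days come from the chart above; the other four stories’ days are hypothetical, chosen only so the average works out to the 2.4 days mentioned earlier:

```python
# Minimal sketch: average cycle time from story start/finish days.
# Story 1 (started Day 1, finished Day 3) comes from the chart; the other
# stories are hypothetical, chosen to reproduce the 2.4-day average.
import math

stories = {
    "Story 1": (1, 3),
    "Story 2": (3, 6),   # hypothetical
    "Story 3": (5, 7),   # hypothetical
    "Story 4": (6, 9),   # hypothetical
    "Story 5": (8, 10),  # hypothetical
}

durations = [finish - start for start, finish in stories.values()]
average_cycle_time = sum(durations) / len(durations)
print(f"Average cycle time: {average_cycle_time} days")  # 2.4 days

# Round up to the nearest half day for planning, as suggested above.
planning_cycle_time = math.ceil(average_cycle_time * 2) / 2
print(f"Cycle time to use for planning: {planning_cycle_time} days")  # 2.5 days
```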

Cycle time also has a virtuous, positive feedback loop. Everyone starts to ask: Could we have made that story smaller? Could we have split it into two or three stories, to get something out faster?

To estimate, think about minimums:

  • Minimum Viable Experiment (MVE), to learn
  • Minimum Viable Product (MVP), to release
  • Minimum Marketable Feature set (MMF)
  • Minimum Adoptable Feature set (MAF), so you can move people to a new product

Then, create small stories from those minimums, so the team can really finish a story at least once a day.

I like cycle time as feedback for the product owner and the team. It helps everybody work together – not necessarily faster, but better. Feedback on the stories helps you figure out what to do next.

The next trap you identified was “software teams as feature factories.” How does that happen, and what’s the problem with it?

Many product owners fall into this trap, and it comes from their managers. The problem is, you might say, oh, you can just do these features. They don’t need to work together; it’s not like we have an iteration or release goal, we don’t need a project, we’ll just have the team doing features.

But software product development is a form of learning. Whenever you have innovation, the team needs to learn together, with the product owner, so you have a chance of delivering the right stuff at the right time.

The first tip to prevent “feature-itis” is: don’t dig your hole even deeper. You might know the broken windows theory. When New York’s mayor wanted to reduce crime, the first thing he said was: we will fix broken windows in empty buildings.

If people see it’s OK to have brokenness, they don’t finish things. We often rush to finish a feature, but the feature really isn’t done. So how do we build the cost of addressing technical debt or incompleteness into each feature? If you’re using iterations, leave room to pay it off.

One of my clients created three ranked backlogs. This allowed senior management to understand: we have a feature backlog, a technical debt backlog, and a defect backlog. They used cards on the wall, so people could see how many cards in each backlog – I recommend no more than 10 each. With cards, you could say: what do we do first, second, third? Do we need to focus on two or three features, a few areas of technical debt, a couple of defects? When you do this, and take a more holistic view of the product, everything goes faster.
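
As a minimal sketch of how those three ranked backlogs might be represented (the card names here are hypothetical; the cap of 10 cards per backlog follows the recommendation above):

```python
# Minimal sketch: three ranked backlogs with a visible cap on cards.
# All card names are hypothetical examples.
MAX_CARDS = 10

backlogs = {
    "Features":       ["Export to CSV", "Bulk invite", "Dark mode"],
    "Technical debt": ["Split billing module", "Upgrade build tooling"],
    "Defects":        ["Fix login timeout", "Fix report totals"],
}

for name, cards in backlogs.items():
    assert len(cards) <= MAX_CARDS, f"{name} backlog has more than {MAX_CARDS} cards"
    print(f"{name} backlog (ranked):")
    for rank, card in enumerate(cards, start=1):
        print(f"  {rank}. {card}")
```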

How do teams violate iteration boundaries, and what should we do about it?

Teams often can’t complete within the iteration. One reason is that you often see iterations of waterfall, along with insufficient product thinking.

Let me show you what an iteration of waterfall looked like at one team.

[Chart: Waterfall masquerading as agile in a timebox]

They had a 10-day iteration, but it had phases. They spent Day 1 defining requirements. Then another three days on analysis and discussion. They’re four days in, and no code yet. They would have been much better off with a Minimum Viable Experiment (MVE) to help understand what the customer really needed.

Then they spent two days in design: still no code! And then they finally spent two days coding.

The next problem was, they worked across the architecture. Platform people worked across the platform, middleware people worked across the middleware; app layer worked across app layer, and UI people worked at the UI level. Now they had code but it wasn’t integrated. So they spent the next two days trying to integrate it. Not surprisingly, it didn’t all work. So the testers were supposed to stay late at night on Day 9 and into Day 10. And – no surprise – they didn’t finish all their testing.

The key is: how can we get a little code on the first day, a little more code on the next? That’s how you know things are working, because otherwise teams will violate the iteration boundary.

Part of the problem was how their team was organized. There was an “all-on-high” architect who deigned to work with the team on Day 1. There was someone called a product owner, but the product manager gave the owner an enormous list, and said: the team must do all this in this iteration. So, strategic planning and managing roadmaps wasn’t separated from managing backlogs.

Product owners really manage backlogs: what can the team do in this 10-day iteration? And yes, they need to talk to product managers to understand strategic goals for the next few months. But then the product owner should work with the team to create small stories.

I’m not fond of use cases in agile, but it’s fine to have them as long as you realize you won’t be able to implement all the paths in one iteration.

Another problem that team had: it was lopsided. It had one UI person, two platform people, one middleware person, and two app layer people. Sometimes the UI person was part of testing, sometimes the product owner was, but the developers never were.

Why didn’t the developers test? When I was a developer, I was extremely good at not testing anywhere I had bugs. We are blind to our own defects, and a lot of developers wrongly use that reasoning to avoid testing their own code. But if you test the feature as a feature, and especially if you’re doing API-based testing, then developers can test their code.
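
A minimal sketch of what API-level, feature-as-a-feature testing could look like, assuming a hypothetical send_email function in the product under test:

```python
# Minimal sketch, assuming a hypothetical emailclient module with a send_email API.
# Testing at the API level exercises the feature as a feature, not just the
# developer's own code paths.
import pytest

from emailclient import send_email  # hypothetical module and function


def test_send_email_with_cc_bcc_and_attachment():
    result = send_email(
        to=["reader@example.com"],
        cc=["manager@example.com"],
        bcc=["archive@example.com"],
        subject="Weekly report",
        body="See attached.",
        attachments=["report.pdf"],
    )
    assert result.delivered  # hypothetical result object


def test_send_email_requires_a_recipient():
    with pytest.raises(ValueError):
        send_email(to=[], subject="No recipients", body="")
```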

By defining all requirements, analyzing and discussing so everyone understood what was going on, they were trying to do the right thing. But they hadn’t sliced their stories small enough to do just a few things through the architecture, and testing was a disaster.

A related problem is insufficient product thinking. If a team has a lot of work in progress and it depends on other people, and people start their own stories, they don’t get the value of agile. With each project, you should increase your capabilities over time. If you have a goal that says, here’s how we’re achieving this little piece of the iteration goal, and then that piece, you can actually get to a release.

One valuable thing about an agile approach is you can release daily, but you can release internally at least at the end of the iteration. With flow, you can release any time you finish a feature. So it’s really important to say: how do we work as a team, and not have everyone starting their own story? How do we find the people we depend on, so we can work with them to finish?

But the more work-in-progress you have, the less product thinking people can do. They’re so focused on finishing their work, they’re not thinking about: how do we get the right product capabilities inside this iteration, so we can show people the value we finished?

Here’s what I mean by working in features instead of by architecture. If teams work across the layers of the architecture, then they have all these curlicue features, where one feature depends on other people and teams in platform, in middleware, in app layer, in GUI, in API:

[Diagram: Work in features, not by architecture]

Instead, we want small, coherent features that work through the architecture. That works better. But how do you move from curlicue features depending on 14 teams to one team building small, coherent features?

Get all your teams into one physical or virtual room. Say: we need to move to implementing by feature, not by piece across the architecture. With all the interdependent teams we have now, all people can say is ‘I did my part,’ but it isn’t until we actually release into the code base that we know if we actually have a feature. How can you organize yourselves to reduce these interdependencies?

When you pose the problem that way, teams often say: well, these four people can work together with a couple of testers, and these other five or six people, because some features are thicker than others.

Beyond asking teams to reconfigure themselves, think again about small stories: what can teams complete in one day or less? The smaller the story, the less complexity and interdependency, the easier it is to create all the test automation, and change the build if necessary.

I talked about Minimum Viable Experiments (MVEs): that goes into a sprint goal. Maybe you have a sprint goal that says we’re doing these kinds of experiments, we don’t have to wait to the end of the release. We’re working on increasing the product’s relevance with every iteration.

I ask teams to limit work in progress. Product managers and product owners can do that, too. You can say: I know we have ten stories in this iteration and you think you can complete each in a day or less, but I’d really like to see these two first. How can you do that?

Then, clarify your acceptance criteria. Say: we want this honest-to-goodness finished. We want all the test automation, all the build automation, all the defects you know about addressed now. That really helps the team finish one or two things at a time, and go on to the next.

You said another trap is focusing on delivery instead of integrating discovery. Can you explore that more, given that we do have to deliver?

The valuable thing about agile approaches is we can integrate learning as we go. We can not only ask how are we doing on the project, but also how are we doing with the product, and what can we learn about our process? Every week or two, we get to reflect on all of that.

To support this, again, I like the idea of MVEs: what’s the smallest idea we can learn from? Too often we don’t integrate MVEs into our feature sets. We should explicitly say we need small ideas we can learn from, and if we call them MVEs, now we have a name for it.

We might not be able to expose MVEs to all of our customers. We might have to do A/B tests, or release to a limited group via beta, or expose a particular part of the product and ask people to try it, and get data about what they’re doing. You might do a prototype inside the organization, and then turn it into an MVE to learn from the customer about how to create something that’s useful.

One step up is the Minimum Viable Product (MVP): what’s just enough for the customer to use, even if it doesn’t contain full functionality? So if you were building an email client, an MVP feature set might be anything you need to do to send and receive email, including BCC: and CC: and attachments – just the email itself, not its routing. I’m not saying that’s sufficient, but a customer could use this.

Then, moving up, if we want a Minimum Marketable Feature (MMF), maybe that’s all of what we’ve described plus you can send any size email. Or you can transcribe attachments to audio, or to text.

Now, what does the customer need to do her work: the Minimum Adoptable Feature set (MAF)? One bank said: we’re trying to build enough so our people can migrate away from our old backoffice application. Before we turn off the old product, they need to be able to do what they’ve been doing.

When you think about feature sets in terms of minimums like these, it helps teams – and especially senior management – understand that we’re doing discovery as we go. We won’t finish the feature set in one iteration. We’re not trying to imply we can. We’re saying: here are steps towards achieving them.

And then I want to monitor our progress. For example, if you’re working towards a Minimum Adoptable Feature set, you might have a bunch of stuff waiting for release, because we need an entire set of features before we can release any of them. You can ask: when will we release it, and what have we released since our last meeting? A product backlog burnup chart can show the cumulative features you’re finishing – and it might show that a feature set grew over time.
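
Here’s a minimal sketch of the data behind such a burnup chart; the per-iteration counts and the growing feature-set scope are hypothetical:

```python
# Minimal sketch: cumulative finished features per iteration for a burnup chart.
# All numbers are hypothetical; note the feature-set scope growing at iteration 3.
features_finished_per_iteration = [2, 3, 1, 4, 3]
feature_set_scope_per_iteration = [12, 12, 14, 15, 15]

cumulative_finished = []
running_total = 0
for finished in features_finished_per_iteration:
    running_total += finished
    cumulative_finished.append(running_total)

for i, (done, scope) in enumerate(
        zip(cumulative_finished, feature_set_scope_per_iteration), start=1):
    print(f"Iteration {i}: {done} of {scope} features finished")
```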

You said most roadmaps show too much certainty. How do you express uncertainty in a roadmap?

In a typical roadmap, everything’s lined up neatly on a quarterly boundary. It’s beautiful, but it hides four common incorrect assumptions about roadmaps. It assumes:

  • All feature sets have an even distribution of features
  • The value of all feature sets is similar
  • The features are roughly the same size
  • The arrival rate of features is predictable

But the more we integrate discovery into our delivery, the less certain we are of all that. We might learn we have more work to do, or less.

Teams can’t properly estimate what they’ll finish in one quarter. And especially if you’re using agile, you want to integrate discovery even in quarterly planning. A scope-based roadmap is one way to do that:

[Chart: A scope-based roadmap]

It’s still a wishlist, but we have Minimum Marketable Features (MMFs), and we have “Enough for a Release” with a big black line. This says, yeah, we’re pretty sure we can do these. And if we finish with the stuff above the line, we can pull more MMFs in this rank order.

In this example, Q3 shows a large bunch of MMFs, and nothing under the black line. This tells us we’re in danger of pushing too much stuff into this quarter. The more work we push into a timebox, the more we risk having delivery without discovery.

Charts like this help you do enough to still have value, and still release, but show people you’re uncertain. Realistically, anything you’re planning beyond a month or two will change.

Too many managers and product owners try to push work into an iteration by asking “how much can we do?” instead of “how little?” When we ask how little we can do and still have enough for a release, we have more room for discovery, and less need for estimates, at least at the beginning.

To help people understand that roadmaps aren’t certain, ask these questions:

  • How little can we plan?
  • How little can we discover before we re-plan?
  • Do we need a strategic look or is this tactical?
  • How long is our rolling wave plan, and is it working for us? (A rolling wave plan of a quarter is too much for me and my clients.)
  • Finally, do we need more or less planning and re-planning? (To integrate discovery into delivery, you want to plan more often but plan smaller stuff.)