Stack Overflow co-founder Jeff Atwood has noted that he’s “a big fan of developers handling tech support calls for their software, at least a part of the time. It really motivates them to address the pain points users are experiencing, because they become shared pain points.” There is tremendous value in hearing firsthand how your users are engaging with the applications you’ve spent so much time developing, but you also need a scalable and objective way to reveal these insights.
As a developer, you face too many requests from diverse stakeholders, too little time, and often too little data to validate priorities and build the best applications for your customers. Throughout the software development lifecycle you face key questions:
- What problems are users trying to solve?
- Which features are most actively used?
- What OS platforms and hardware architectures are they using?
- How many customers are still using legacy functionality?
- Where should I focus QA efforts?
- What issues are beta users experiencing, but not reporting?
- How do I move users to the latest version?
- Are customers using the features they told us they couldn’t live without?
- How should I prioritize bug fixes?
Software usage analytics can help answer these questions. By collecting and analyzing anonymous environmental and product usage data, you gain actionable insights that let you make decisions based on facts rather than gut feel. Let’s look at one example from the release stage to see how this works.
There’s always a flurry of activity and firefighting when you’re getting ready to release a new version of your application. Should you spend your time fixing a bug or should you finish development on a new feature? Which features will ship and which must wait for the next release? How should you prioritize test environments and functions? Which exceptions are occurring? Are there patterns to failures? You need data to back up your gut feelings and address the opinions of competing stakeholders.
By tracking feature usage, you can measure the relative importance of those features and assign resources accordingly. If you’re conducting A/B tests, usage analytics provides you with automated feedback so you’ll know what works and what doesn’t in real time. Compared to combing through support requests or survey responses from a small sample of your users, you can make more informed decisions much more quickly.
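As a minimal sketch of what feature-usage tracking looks like under the hood, the hypothetical `UsageTracker` below records named feature events in memory and computes each feature’s share of total usage. (A real analytics client would batch events and send them to a backend; the class and method names here are illustrative, not any particular vendor’s API.)

```python
from collections import Counter


class UsageTracker:
    """Minimal in-memory sketch of feature-usage tracking.

    A production client would batch events and ship them to an
    analytics backend instead of keeping counts locally.
    """

    def __init__(self):
        self._counts = Counter()

    def track(self, feature: str) -> None:
        # Record one use of a named feature.
        self._counts[feature] += 1

    def relative_usage(self) -> dict:
        # Each feature's share of all recorded events -- the
        # "relative importance" used to assign resources.
        total = sum(self._counts.values())
        return {name: n / total for name, n in self._counts.items()}


tracker = UsageTracker()
for feature in ["export_pdf", "export_pdf", "export_pdf", "dark_mode"]:
    tracker.track(feature)

shares = tracker.relative_usage()
print(shares["export_pdf"])  # 0.75
```

The same per-feature counts feed an A/B test directly: tag each event with the variant the user saw, and the relative shares become your real-time feedback.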
By tracking environments, you might learn that only 3 percent of your users are still on Windows XP Service Pack 2, so fixing that platform’s performance issues can wait until after release and be addressed as part of your regular update process.
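Environment tracking has two halves: the client reports an anonymous snapshot of its OS and architecture, and the server tallies those reports into platform shares. A rough sketch, using Python’s standard `platform` module for the client side and a hard-coded list standing in for received reports (the field names and sample data are assumptions for illustration):

```python
import platform
from collections import Counter


def environment_snapshot() -> dict:
    # Anonymous environment data a client might attach to each session.
    # Contains no user-identifying information.
    return {
        "os": platform.system(),
        "os_version": platform.release(),
        "arch": platform.machine(),
    }


# Server side: tally reported environments to size each platform's share.
# These reports stand in for data received from real clients.
reports = [
    {"os": "Windows", "os_version": "10"},
    {"os": "Windows", "os_version": "10"},
    {"os": "Windows", "os_version": "XP SP2"},
]
by_version = Counter((r["os"], r["os_version"]) for r in reports)
share_xp = by_version[("Windows", "XP SP2")] / len(reports)
print(round(share_xp * 100))  # 33 (percent of users on XP SP2)
```

Once the shares are known, a small or shrinking platform segment is exactly the kind of data point that justifies deferring a fix.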
By tracking exceptions, you gain insight into which functions are causing problems and how many users are affected. Exceptions are reported immediately, with no waiting for customers to call support, and you have quantitative data on their effect on your user base. Now you’re prioritizing resources on the most important issues and addressing them proactively.
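The key detail in exception tracking is counting *impacted users*, not raw error events, so that one noisy user doesn’t outrank a failure hitting your whole base. A minimal sketch: attribute each caught exception to the function that raised it (via the traceback) and collect the set of anonymous user IDs it affected. The `report_exception` helper and the sample data are hypothetical.

```python
import traceback
from collections import defaultdict

# function name -> set of anonymous user IDs impacted by a failure there
impact = defaultdict(set)


def report_exception(user_id: str, exc: Exception) -> None:
    # Attribute the failure to the innermost Python frame in the traceback.
    tb = traceback.extract_tb(exc.__traceback__)
    func = tb[-1].name if tb else "<unknown>"
    impact[func].add(user_id)


def risky_parse(text: str) -> int:
    # Stand-in for any function that can fail in the field.
    return int(text)


# Simulated sessions: each tuple is (anonymous user ID, input payload).
for user, payload in [("u1", "42"), ("u2", "oops"), ("u3", "oops")]:
    try:
        risky_parse(payload)
    except Exception as exc:
        report_exception(user, exc)

print(len(impact["risky_parse"]))  # 2 users impacted
```

Ranking `impact` by set size gives the prioritized bug-fix list the paragraph above describes: the functions hurting the most users come first.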
Curious about other areas of the development lifecycle? Learn more about how software usage data can help you make data-driven decisions in our webinar with SD Times’ David Rubinstein.