This week, I had the opportunity to speak at the Agile Practitioners 2013 conference. The topic of the talk was Product Roadmap, Planning and Launch in an Agile Environment.
The talk covered approaches to modern product management, and specifically the considerations that arise from agile methodologies and short product release cycles.
Fundamentally, old-style product management assumed software releases are done infrequently, something along the lines of this diagram:
Whereas modern product cycles rely on shorter cycles, something along the lines of this diagram:
The assumption in modern approaches is that the road to good software is shorter when making smaller steps and frequent turns than when making large steps and more radical turns. (This is geometrically true in the diagrams…)
Old-style product cycles consisted of three main steps: planning (negotiation, prioritization, scheduling), development (design, coding, testing) and launch (alpha/beta, release, outbound marketing). The main question I was trying to tackle in the talk was how the corresponding activities map to product cycles with frequent releases.
On a side note, some organizations use old-style product cycles (infrequent software releases) while using "agile development" techniques internally (that is, frequent internal releases). While perhaps better than nothing at all, this approach misses, in my mind, much of the benefit of agile software development. At the end of the day, the biggest benefit is adapting to customer feedback, and without the software reaching real customers, that value diminishes.
The areas I was trying to tackle in the talk were:
- How does planning occur in an environment with no defined period for planning ("beginning of the release"), where the working assumption is that many of the details (and associated effort) will only be revealed during the development process? And how do roadmaps look in such an environment?
- How do product launches occur in an environment with no defined period for launch, where software is instead ready in chunks? How and when does customer feedback get incorporated into the cycle?
- How does one integrate new approaches and opportunities brought about by agile development? Mostly, agile approaches facilitate experimentation through proof-of-concepts and such (with various variants such as MVP, MSP, and lean).
Here are some of the practices we’ve come to follow over the years:
Our planning cadence at Webcollage is as follows:
- Annually: high-level priorities for the year and a straw-man product framework. We keep a lot of slack, which grows the further out we plan.
- Quarterly: review priorities again, adapt for the upcoming quarter. We still keep slack at around 50%.
- Ongoing: reprioritization using standard agile techniques—wish lists, backlogs, iterations, …
- We present external roadmaps to customers in a way that reflects our high level framework.
- As part of the roadmap, we do not normally commit to specific features and timelines. We’ve come to realize that hard commitments directly reduce our degree of freedom, or our ability to be agile. This in turn limits our ability to innovate and bring more value to all of our customer base. (This was a heated discussion at the talk; to some people, the mere idea sounded like science fiction.)
- We release software every two weeks.
- We hold a short weekly meeting (up to one hour), which involves the leadership of many areas of the company: Products, R&D, Professional Services, Pre-Sales, Product Marketing, Operations, Technical Services, Technical Support. During the meeting we review noteworthy features in the last iteration and in the upcoming iteration, and identify follow-on action items and tasks (around launch, rollout etc.).
Very rarely can features be completely ready in one iteration. For one thing, creating product documentation requires a working product, which only exists at the end of each iteration.
I spent some time during the talk presenting Feature Flags: the ability to turn features on or off, oftentimes in the production environment, post installation. In our environment, we often roll out features incrementally: we start with internal users; then open to select customers; then we may open to most customers, except ones whose day-to-day work may be affected (and ensure we communicate with them properly); then we turn the feature on for all customers; finally, we remove the old behavior. (This topic, too, yielded some heated discussion around the potential need to support a large number of configurations, an issue we have not encountered so far.)
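The staged rollout above can be sketched as a simple feature-flag check. This is a minimal illustration, not our actual implementation; the stage names, flag store, and customer lists here are hypothetical, and in production the flag state would live in a configuration store that can be changed post-installation without a redeploy.

```python
from enum import IntEnum

class Stage(IntEnum):
    # Hypothetical rollout stages, in increasing order of exposure.
    OFF = 0
    INTERNAL = 1
    SELECT_CUSTOMERS = 2
    MOST_CUSTOMERS = 3
    ALL_CUSTOMERS = 4

# Per-feature rollout state (would normally come from a config store).
FLAGS = {"new-editor": Stage.SELECT_CUSTOMERS}

# Customers explicitly opted in during the early stage, and customers
# whose day-to-day work might be affected, held back until later.
SELECTED = {"acme"}
EXCLUDED = {"bigcorp"}

def is_enabled(feature: str, customer: str, internal: bool = False) -> bool:
    """Decide whether `feature` is visible to `customer` at its current stage."""
    stage = FLAGS.get(feature, Stage.OFF)
    if stage == Stage.OFF:
        return False
    if internal:
        return True  # internal users see any feature that is at least INTERNAL
    if stage == Stage.INTERNAL:
        return False
    if stage == Stage.SELECT_CUSTOMERS:
        return customer in SELECTED
    if stage == Stage.MOST_CUSTOMERS:
        return customer not in EXCLUDED
    return True  # ALL_CUSTOMERS
```

Advancing the rollout is then just bumping the stage for a feature, and "removing the old behavior" corresponds to deleting the flag and the conditional once the feature reaches all customers.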
As I mentioned in other posts, our methodology is based on Kanban and facilitates “open iterations”. In other words, we allow customer (and internal) feedback to enter the current iteration. This reduces predictability with respect to new functionality, but increases the speed at which we are able to adapt.
Previously, when we closed iteration content to new requests (as is dictated, for example, by Scrum), we ended up with an odd-even syndrome: because it took a few days for feedback to be received and analyzed, it could only be handled in the following iteration.
With our current Kanban-based approach, we can schedule issues resulting from customer feedback even if an iteration has started.
Proof of Concepts and Feature Depth
In the old days, product managers had to be gamblers. They would gather feature requests and (all processes considered) essentially gamble on which features would be successful. By the time the next release launched, everyone hoped the gamble had paid off. (And in many cases it had not, as is evident from the adoption rate of Windows Vista, for example.)
Nowadays, proof-of-concept releases have become a standard business tool. Variations of the concept go by different names, from the 3 L's (Launch-Listen-Learn) to MVP (Minimum Viable Product), MSP (Minimum Sellable Product), and a few more.
We’ve found that the conventional focus on “user stories” misses the point when it comes to proof of concepts. While indeed a user may need to accomplish a certain task (hence a “story”), the issue is more about the “depth” at which a feature (or story) is implemented.
Clear agreement and communication of the feature depth (i.e., level of completeness, robustness and finesse) help keep everyone (and especially coding and testing) on the same page. When the feature is deemed successful, one can and should iterate on depth, improving completeness, robustness and finesse.
All in all, I don’t believe there’s magic in managing product lifecycles in agile environments. Unfortunately, many of the old-style practices aren’t optimized for this environment; and, many of the tools are too new to provide a true end-to-end solution. My goal was merely to share one company’s know-how to potentially increase other companies’ confidence in moving in a similar direction.