The Problem
It may be different in a small start-up or a firm
well-organized into small, highly integrated business/technology verticals, but in a typical
large corporate enterprise with matrixed silos, it is quite challenging for a Product Owner to
speak for all of the stakeholders on a project.
- The PO must be able to speak fluently to the technical people on the team.
- The PO must be able to provide one reliable set of priorities that reflects agreement among all of the PO's horizontal peers in the organization, plus their direct reports. If there are competing priorities among those peers, creating and maintaining this uber-backlog will require some serious facilitation chops.
- The PO must also be able to speak authoritatively within the funding hierarchy, or hierarchies, in which the project finds itself. The "funding authority" is likely an executive with her own relationships to maintain.
- The PO should also be a user of the software and be able to speak to the full user experience; the PO should be able to conduct UAT.
- While thus occupied, the PO must also be steadily (if not continuously) available to the team.
Moreover, on teams that believe there should be life for a project before iteration
1, the PO somehow has to make all of this happen at a high level during a quick 2-4 week
"inception" period, and then keep the team in touch with the right subject matter
experts as the project proceeds. How can any one person actually do all
this?
Spikes
For those of you who aren't already
familiar, a "spike" or "tracer bullet" is a short piece of work within an
agile project that one or two programmers may be assigned to do outside of the iteration
structure. The spike investigates an unknown technology problem well enough that the related work
can be estimated. There's a nice explanation in this blog post about these related concepts and their
origins.
An example would be that the team discovers it needs to use a new visualization
technology for a planned dashboard, but nobody on the team has ever used the technology
before. So the team simply doesn't know how long the work related to that new technology will
take. When the team urgently needs to know how hard the new technology is to work with, the
group agrees to send a pair of programmers off for a pre-specified amount of time (a "time
box") to learn enough about it that the project work can be estimated.
Any work the programmers do in this
spike will typically be thrown away--the spike produces team learning, not reusable
software. Once you know how long the tasks related to the new technology are going to
take, you can make appropriate decisions about what to include in and what to postpone from your
current planned release, compared with other features that have already been estimated. You
adjust the backlog accordingly, and the team moves forward.
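To make that last decision concrete, here is a minimal sketch in Python; the feature names, effort numbers, value scores, and capacity are all invented for illustration, not taken from any real project. It shows a team re-checking which estimated features still fit a planned release once the spike turns an unknown into a real estimate.

```python
# Hypothetical backlog: (feature, estimated effort in ideal days, business value score).
backlog = [
    ("order search", 5, 8),
    ("audit export", 3, 5),
    ("3d dashboard", None, 6),   # effort unknown until the spike
]

release_capacity = 12  # ideal days remaining in the planned release

# After the spike, the team replaces the unknown effort with a real estimate.
backlog = [(name, 9 if effort is None else effort, value)
           for name, effort, value in backlog]

# Greedy re-plan: keep the highest-value features that still fit the capacity.
included, remaining = [], release_capacity
for name, effort, value in sorted(backlog, key=lambda f: f[2], reverse=True):
    if effort <= remaining:
        included.append(name)
        remaining -= effort

print("In this release:", included)
print("Postponed:", [name for name, effort, value in backlog if name not in included])
```

Run as written, the newly estimated dashboard no longer fits the release and drops out, which is exactly the kind of re-prioritization the spike exists to enable.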
Design Spikes
Dutch, who has brought many years of
product management experience to Thoughtworks, points out that on a product-oriented team, you
may be as likely to need a "design spike" as a "technology spike" before you
can complete backlog grooming, or even complete a story that is currently in play in an
iteration.
In this case, the developers may or may not know how to write the software
behind the story, but what is very clear as you talk it through in the team room is that nobody
knows what the desired user experience should be for the software. What do you do?
You take the story out of play for the current iteration, and hold a workshop for the PO and any
SMEs who can speak for customers, or even customers themselves, and determine what the user
experience should be. This could be expressed as a wireframe or a photograph of a
whiteboard--unless the question is specific to the design at the CSS level, you may find it more
helpful to come out of the design spike with team learning about the desired user experience,
not a complete web page template.
Note that in an idealized "Continuous
Delivery" project, every iteration actually calls for a software
implementation of a design spike, and so-called A/B user testing determines what your next step
should be by measuring the way actual customers use the software. Notice what happens
here--if you're working with known technology, you may do an entire project without doing a
software spike, in the standard sense of the word. Even though some of the software gets
thrown away, you're always building the real software, not a throw-away architecture.
But from a design perspective, continuous delivery is nothing but a set of design spikes
that result in team learning and throw off the software itself as a
side effect.
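For readers who want the A/B idea spelled out, here is a minimal sketch in Python; the user ids, the event log, and the "completed the workflow" metric are assumptions for illustration only. It assigns users deterministically to two design variants and compares a simple completion rate.

```python
import hashlib

# Deterministically assign each user to variant "A" or "B" by hashing the user id.
def variant(user_id: str) -> str:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Hypothetical usage log: (user_id, completed_the_workflow)
events = [("u1", True), ("u2", False), ("u3", True), ("u4", True), ("u5", False)]

totals = {"A": [0, 0], "B": [0, 0]}  # [completions, visits] per variant
for user_id, completed in events:
    bucket = totals[variant(user_id)]
    bucket[0] += int(completed)
    bucket[1] += 1

for name, (done, seen) in totals.items():
    rate = done / seen if seen else 0.0
    print(f"variant {name}: {done}/{seen} completed ({rate:.0%})")
```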
Value Spikes
So let's take the spike
concept back to the poor, overworked Product Owner stuck doing a vendor workflow implementation
for internal use at a very large company. This thing is not being ported to an iPad any
time soon.
The team is all sitting around in the team room talking about the new dashboard you're implementing
in this sprint. As it happens, the developers are not familiar with the visualization
technology, and they are eager to go off and have a spike to figure it out. And goodness
knows they deserve it--all they do is write integration code all day long. But
wait--before two lucky programmers run off to play with something new, the
team stops to ask the Product Owner what the value of this dashboard is going to be. It
seems like a lot of architectural investigation for a product that is going to use 3D to
visualize work orders going from the "pending" state to the "done"
state. Why, you ask, is 3D necessary for this? What is the value?
Most likely
the Product Owner does not know, in this environment, how the team ended up with a request for a
3D workflow state change visualizer. There are a lot of players, and there's a lot of
politics, and it's a big project, and this is only one of a thousand requested
features.
This is where the "value spike" comes in. Just as you should
stop work to get a general idea of the effort involved in specific stories, or the user
experience required, you should also stop work to allow the Product Owner a time box to assemble
the right SMEs for a meeting to determine the authorship and impact of a requirement whose value
seems questionable. The PO does not have this information top of mind any more than the
developer has information on every possible technology ready to hand.
In this case, the
PO will return within the time box with a fresh view of the value of the feature, and just as
with a technology spike, the PO will do a new cost/benefit analysis weighing the value
the feature will bring against the cost of developing it, and modify the project
backlog accordingly.
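As a rough picture of that adjustment, here is a minimal sketch in Python; the features, value scores, and costs are made up for illustration. It reorders a backlog by a simple value-to-cost ratio after the value spike revises one feature's value downward.

```python
# Hypothetical backlog items: (feature, value score, estimated cost in ideal days).
backlog = [
    ("3d workflow visualizer", 2, 15),   # value revised downward after the value spike
    ("status email digest", 6, 3),
    ("bulk work-order import", 8, 8),
]

# Reorder by value-to-cost ratio; the questionable feature sinks toward the bottom.
for name, value, cost in sorted(backlog, key=lambda f: f[1] / f[2], reverse=True):
    print(f"{name}: value {value}, cost {cost}, ratio {value / cost:.2f}")
```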
Design and value spikes should be tactics that every Product Owner keeps handy. You don't have to be omniscient if you have a technique that lets you become expert on one little piece at a time. And that's as close as we get to Enterprise Fun these days.