Continuous Delivery Cannot Succeed Without Business Analysts
What is the role of an analyst in Continuous Delivery? Do the concept and its practice exist only in development and DevOps? If not, then what do analysts have to do with it?
To answer these questions, let's first think about what Continuous Delivery represents: it is about building quality right into your product or application; it is about enabling more frequent and reliable releases, shrinking the ceremony around releasing to production to the point where you are always ready to release quality software that is fit for purpose.
To date, so much has been said about the development and deployment side of Continuous Delivery that the business and analysis counterpart has been somewhat overshadowed. So just how would analysts apply these principles in their day-to-day work? Operating in a continuous-delivery fashion may seem daunting at first, but it needn't be. Here are a few principles worth looking at:
#1 Build the Right Thing by Building In Feedback
A major part of the analysis discipline remains backlog management. This works well when requirements do not generally change all that much. The same approach, however, does not lend itself well to situations where you need to change the direction of your application or product based on the outcome of validating a feature. For example: imagine you have to maintain an iteration's worth of stories ahead of development at all times, and then imagine you find out that you have to scrap your buffer as well as your backlog many times over, because a single feature validation has changed 30% of the assumptions you had made earlier. You would probably feel more than a little deflated by the end.
To remedy this, we need an approach with a much shorter feedback cycle: one that leaves us with less guesswork and lowers the cost (and pain) of adapting. The scope of your features should include measures that tell you sooner whether you are on the right track, and how you should go about validating the assumptions you have made about the rest of the feature roadmap. You can view your epics and stories as a set of assumptions to be validated; each time you validate one, you learn more about what success means for your product, incrementally moving towards building the right thing.
Let's compare and contrast the two approaches with examples. Imagine you are brainstorming a feature to allow users to sign up for an online magazine subscription. The scope includes these parts:
Feature: Allow subscribers to sign up for a Weekly or Monthly Subscription
- Send weekly-aggregated content to subscribers
- Send monthly digest to subscribers
- Get subscriber contact and payment information
- Confirm order via email
- Analytics
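To make the idea of treating scope as a set of assumptions a little more concrete, here is a minimal sketch of how those items might be captured alongside the measure that will validate them. The data structure, field names and measures are hypothetical illustrations, not part of the feature scope above.

```python
from dataclasses import dataclass

@dataclass
class ScopeItem:
    """A thin slice of the feature, framed as an assumption to be validated."""
    title: str
    assumption: str       # what we currently believe to be true
    success_measure: str  # the signal that will confirm or refute the belief

# Illustrative slices of the subscription feature; the wording is hypothetical.
scope = [
    ScopeItem(
        title="Get subscriber contact and payment information",
        assumption="Readers are willing to pay for a subscription at all",
        success_measure="Sign-up conversion rate during the first time-box",
    ),
    ScopeItem(
        title="Send weekly-aggregated content to subscribers",
        assumption="Most subscribers prefer a weekly cadence over a monthly one",
        success_measure="Share of sign-ups choosing 'weekly', plus open rates of early issues",
    ),
]

for item in scope:
    print(f"{item.title} -> to validate: {item.assumption}")
```

Whether you track this in code, a spreadsheet or on a card wall matters less than making each assumption and its measure explicit.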
Example 1:
Taking the above, what you might normally do is choose to deliver the weekly content first. You also find that the monthly version would need to be curated before it can go out to subscribers. So you might plan to develop your feature in these increments:
| First release | Second release |
| --- | --- |
| Get subscriber contact and payment information | Send monthly digest to subscribers |
| Send weekly-aggregated content to subscribers | Analytics |
| Confirm order via email | |
The first release gives your subscribers a weekly subscription and processes payment; the second release lets you enhance the service by selling another product, as well as adding a feedback mechanism. In this case, you are still assuming that you will build the monthly digest, and only afterwards get feedback to learn how well you did.
Example 2:
But what if the plan laid out above is a luxury you do not have? Now let's switch gears so you can get feedback sooner. Same context, but consider the following:
| First release |
| --- |
| Get subscriber contact information and their preferred subscription (weekly or monthly) |
| Analytics |
As of the first release, you have not committed to building the weekly or the monthly version yet, or indeed to building either at all. You are testing to validate what your customers prefer before investing the resources to develop. At this point, you have no other requirements sitting idle in your backlog either, because you just do not know what the outcome will be; your next step will be determined by what the analytics tell you. At the end of the time-box, you find that most subscribers prefer a weekly subscription. So you plan out the next release like this:
| Second release |
| --- |
| Send weekly-aggregated content to subscribers |
| Get subscriber payment information |
| Confirm order via email |
As of the second release, notice that you have discarded entirely the idea of building an editor tool to curate the content for a monthly digest (from Example 1), and there is no now-obsolete set of monthly-digest requirements to scrap, because they were never written. These are your saved costs: the financial cost of building something you do not need, and the opportunity cost recouped by spending that time delivering something much more useful to the market.
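As a rough sketch of how the first release's analytics might drive that decision, consider the tally below. The event structure and field names are hypothetical; in practice the numbers would come from whichever analytics tool you already use.

```python
from collections import Counter

# Hypothetical sign-up events captured during the first release's time-box.
signup_events = [
    {"subscriber": "a@example.com", "preference": "weekly"},
    {"subscriber": "b@example.com", "preference": "weekly"},
    {"subscriber": "c@example.com", "preference": "monthly"},
    {"subscriber": "d@example.com", "preference": "weekly"},
]

preferences = Counter(event["preference"] for event in signup_events)
winner, count = preferences.most_common(1)[0]

print(f"{winner} preferred by {count} of {len(signup_events)} sign-ups")
# The outcome of this tally, rather than a pre-written backlog,
# determines what goes into the second release.
```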
#2 Think beyond Working Software
How do we think critically about measuring a feature's success? This goes beyond building working software and testing it when it is in use. It moves our mindset from "How do I write the best requirement on my project?" to "How can I get the best feature out there to achieve a specific business value?" At face value the two sound similar, seemingly marching towards the same goal, but that is not the case. The second shifts the focus from the writing part (the mindset of perfecting how you convey your needs and wants) to whether you have met the goal. This opens up a very different, broader question: we are not just thinking about how the feature will be tested, but how its success will be tested.
Start by asking questions like the following: if we were planning to pivot or pursue a particular path, how would we go about defining the tools that will gather continuous feedback? And once we have built the feature, what are the triggers that tell us it has fulfilled its purpose and it is time to retire it altogether? The latter matters especially if you do not want to keep accumulating features that may or may not be adding value but certainly add to the overall cost of running and maintaining your application. Tell me, how often have we thought about that?
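One way to make such a retirement trigger explicit is to agree on a usage threshold up front and check it against your analytics. The function, numbers and 50-user threshold below are purely illustrative assumptions, not a recommended rule.

```python
def below_threshold(weekly_active_users: int, threshold: int = 50) -> bool:
    """True when usage has dropped below the agreed retirement threshold."""
    return weekly_active_users < threshold

# Hypothetical weekly active-user counts for a feature, oldest first.
recent_usage = [210, 140, 45, 30]

# Retire only after usage stays below the threshold for two consecutive weeks.
if all(below_threshold(users) for users in recent_usage[-2:]):
    print("Raise the feature for retirement; it is no longer earning its keep.")
else:
    print("The feature is still in healthy use.")
```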
To achieve all of this, your backlog also needs to be leaner: leaner in the sense that you may need to be prepared to drop many of the requirements in it, so that you become that much more flexible to change and further reduce the waste of potential re-work.
Let's start with the premise that delivering feature value is NOT quite the same as delivering feature completeness. Don't plan to have all ten stories that make up your feature completed in order to achieve that one business value. Challenge yourself: front-load the value into your first story and think about how you would do it, while keeping practices like vertical slicing and the INVEST principles intact. Another good practice to go along with this is to actively prune your backlog. Just as pruning in the agricultural world removes unwanted branches to encourage growth, you need to go through your backlog regularly to make sure obsolete requirements are removed. So why bother with all this? Simply put, we do it so that we can avoid big-bang releases from the very beginning, because our requirements are already geared towards smaller releases.
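A pruning pass over the backlog can be as simple as filtering out anything marked obsolete or not reviewed recently. The fields, statuses and 90-day cut-off below are illustrative assumptions rather than a prescribed rule; anything that falls out should prompt a conversation, not a silent deletion.

```python
from datetime import date, timedelta

today = date.today()
stale_after = timedelta(days=90)

# Hypothetical backlog items with illustrative fields.
backlog = [
    {"story": "Send weekly-aggregated content to subscribers", "status": "active",
     "last_reviewed": today - timedelta(days=7)},
    {"story": "Send monthly digest to subscribers", "status": "obsolete",
     "last_reviewed": today - timedelta(days=120)},
    {"story": "Confirm order via email", "status": "active",
     "last_reviewed": today - timedelta(days=30)},
]

# Keep only items that are still relevant and have been looked at recently.
pruned = [
    item for item in backlog
    if item["status"] != "obsolete" and today - item["last_reviewed"] <= stale_after
]

for item in pruned:
    print("keep:", item["story"])
```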
#3 Re-think your role as an Analyst
Aside from the changes in process, your role as an analyst will likely change too, and for the better, as you become more responsive and adaptive. It will involve more user research as you peel yourself away from the development room to talk to users. You will engage in guerrilla testing and dip your toes into learning what analytical data can tell you. You will probably collaborate with far more people: not just pairing with developers, QAs and operations folks, but working with designers and business stakeholders alike to keep your requirements real and, more importantly, no longer confined to documentation. There are many more examples than those listed here, and they will need to be tweaked to make the most sense for your particular situation.
Can Continuous Delivery be practiced earlier in the development cycle, all the way from analysis? Do we play a part in this? Absolutely.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.