Why Software Development Methodologies Suck
There’s a lot of dogma in the religious wars around software development practices and methodologies. Are phase-gate methodologies effective at managing the risk of software development, or just risk management kabuki? Does TDD really make for higher quality software? Is pair programming a superior replacement for code review or just a way to inflate consulting rates? I’m going to argue that while scientific evidence to decide these claims is lacking, there are two general principles which can help us choose good practices while at the same time improving the value of the software we deliver: reduce cycle time and increase feedback.
Michael Feathers makes the following observation:
I think that, in the end, we just have to accept that developer skill is a far more significant variable than language choice or methodological nuances[1]. Frankly, I think we all know that, but we seem to suffer from the delusion that they are the primary knobs to tweak. Maybe it’s an extension of the deeply held view that from an economic viewpoint, it would be ideal if people were interchangeable.
The problem is, how do we get skilled developers? Since the concept of individual productivity in IT has never been satisfactorily defined, this is a particularly hard problem to solve. Lines of code – still a popular measure – suffer from the devastating flaw that a line of code is a liability, not an asset as is often thought. Measuring the number of hours worked encourages heroic behavior – but experience shows that the “heroes” are usually the same people who cause projects to become late by taking unacceptable risks early on, and working long hours makes people stupid and leads to poor-quality software. There is still no generally accepted set of professional standards or chartering system for IT professionals, and recruiting good people is very much an art rather than a science.
Psychologists have at least addressed the problem of why it is so difficult to acquire and measure skill in IT. As Daniel Kahneman says in Thinking, Fast and Slow, there are “two basic conditions for acquiring a skill: an environment that is sufficiently regular to be predictable; [and] an opportunity to learn these regularities through prolonged practice.”
But traditional software projects are the opposite of a regular, predictable environment. The only good measure of success of a project – did the end result create the expected value over its lifetime? – is so distant from the critical decisions that caused that success or failure that it’s rare for anybody from the original team even to be present to get the feedback. It’s practically impossible to determine which of those decisions led to success or failure (in artificial intelligence, this is known as the credit-assignment problem).
These factors make it very hard for IT professionals to acquire the skills that lead to successful products and services. Instead, developers acquire the skills that allow them to most efficiently reach the goals they are incentivized by – usually declaring their work “dev complete” as rapidly as possible irrespective of whether the functionality is integrated and production-ready – and similar problems arise in other functional areas too.
The fact that software projects are complex systems rather than regular environments leads to another problem – the extreme difficulty of gathering data on which techniques, practices, and methodologies are actually effective, and the near impossibility of generalizing this data outside the context in which it was gathered.
In his excellent book The Leprechauns of Software Engineering, Laurent Bossavit executes a devastating attack on software development folklore such as the “cost of change” (or “cost of defects”) “curve”, the claim that developer productivity varies by an order of magnitude, the idea of the cone of uncertainty, and many other cornerstones of methodological lore in software development. He shows that these theories – and many others – rest on very small sets of data, gathered either from informal experiments run on computer science students or from projects which cannot possibly have been effectively controlled. The studies that form the basis of these claims are often methodologically unsound, the data poorly analyzed, and – most egregiously – the findings generalized well beyond their domain of applicability[2].
As a result, it’s not possible to take seriously any of the general claims as to whether agile development practices are better than waterfall ones, or vice versa. The intuitions of “thought leaders” are also a poor guide. As Kahneman says, “The confidence that people have in their intuitions is not a reliable guide to their validity… when evaluating expert intuition you should always consider whether there was an adequate opportunity to learn the cues, even in a regular environment.” As Ben Butler-Cole points out in his companion post, “Why Software Development Methodologies Rock”, the very act of introducing a new methodology can generate some of the results its adopters intend to bring about.
You might think that puts us in an impossible position when it comes to deciding how to run teams. But consider why software development is not a regular environment, and why it is so hard to run experiments, to acquire skills, and to measure which practices and decisions lead to success, and which to failure. The root cause in all these cases – the reason the environment is not regular – is that the feedback loop between making a change and understanding the result of that change is too long. The word “change” here should be understood very generally to mean change in requirements, change in methodology, change in development practices, change in business plan, or code or configuration change.
There are many benefits to reducing cycle time – it’s one of the most important principles that emerges when we apply Lean Thinking to software development. Short cycle times are certainly essential for creating great products: as Bret Victor says in his mind-blowing video Inventing on Principle, “so much of creation is discovery, and you can’t discover anything if you can’t see what you’re doing.”
But for me this is the clincher: it’s virtually impossible for us to practice continuous improvement, to learn how to get better as teams or as individuals, and to acquire the skills that enable the successful creation of great products and services – unless we focus on getting that feedback loop as short as possible, so that we can actually detect correlations and discern cause and effect.
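To make the cost of a long feedback loop concrete, here is a deliberately crude toy model in Python (3.10+ for `statistics.correlation`) – an illustrative sketch, not data from any real project. Each change is assumed to have some true effect on the outcome, but that outcome is only observed after a number of intervening changes have muddied the signal; the longer the delay, the weaker the observable correlation between a decision and its result.

```python
import random
import statistics

def observed_correlation(feedback_delay, n_changes=500, noise=1.0, seed=0):
    """Toy model: each change has a true effect drawn at random, but its
    outcome is observed only feedback_delay changes later, blended with
    the effects of everything that happened in between."""
    rng = random.Random(seed)
    effects = [rng.gauss(0, 1) for _ in range(n_changes)]
    pairs = []
    for i in range(n_changes - feedback_delay):
        # What we finally observe mixes this change's true effect with
        # the intervening changes' effects, plus measurement noise.
        intervening = sum(effects[i + 1 : i + 1 + feedback_delay])
        observed = effects[i] + intervening + rng.gauss(0, noise)
        pairs.append((effects[i], observed))
    xs, ys = zip(*pairs)
    return statistics.correlation(xs, ys)  # requires Python 3.10+

for delay in (0, 1, 5, 20, 100):
    print(f"delay={delay:>3}  correlation={observed_correlation(delay):.2f}")
```

With no delay the correlation is strong (about 0.7 in this toy run); once a hundred changes have intervened it falls to roughly 0.1 – and a team in that position has essentially no way to learn which of its decisions mattered.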
In fact, the benefits of having a short cycle time from idea to feedback are so important that they should form one of the most important criteria for your business model. If you have to decide between creating your product as a user-installed package or software-as-a-service, this consideration should push you strongly in the direction of software-as-a-service (I speak from experience here). If you’re building a system which involves hardware, work out how you can get prototypes out as quickly as possible, and how you can modularize both the hardware and the software so you can update them fast and independently. 3D printing is likely to make a huge impact in this area since it allows for the application of software development practices to the evolution of hardware systems. Working in cross-functional teams is more or less a requirement if you want to achieve a sufficiently short cycle time.
Software methodologies – even the “hire a bunch of awesome people and let them self-organize” methodology – suck because they so often lead to cargo-cult behaviour: we’re doing stand-ups, we have a prioritized backlog, we’re even practicing continuous integration for goodness’ sake – why is the stuff we make still shitty and late? Because you forgot the most important thing: building an organization which learns and adapts as fast as possible.
[1] Although, as Laurent Bossavit points out (private communication), “A developer’s skill is in part the method he/she knows and his/her reasons for preferring one language over another.”
[2] I am not suggesting that we give up on running experiments to learn more about what works and what doesn’t in software development, and the contexts in which such claims are valid – quite the contrary, I’m saying we’re not trying nearly hard enough.
This post is from Continuous Delivery by Jez Humble.
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.