Test-Induced Design Damage: Fallacy or Reality?
There is an intriguing ongoing debate between Martin Fowler, Kent Beck and David Heinemeier Hansson (DHH) about Test-Driven Development (TDD) and its impact on software design.
Though there were strong agreements and disagreements along the way, one thing stood out to me throughout: there is no right or wrong answer. The answer depends on the context, and there are various parameters and considerations to examine before arriving at one.
As Kent said near the beginning of Part 3, “Almost no question in Computer Science can be answered as A or B. The answer starts with ‘it depends’, and what it depends on is the interesting part.”
After the first part, Fabio Pereira published his thoughts in “Mockists Are Dead. Long Live Classicists”. Below are my key takeaways from Part 2 of the series: test-induced design damage.
Martin started by summarizing the themes:
- Is TDD the cause of damage to design?
- Is it actually damaging, or is it a good thing?
- How can we tell if something is damaging or not? How do we judge that?
DHH shared this example of “test-induced design damage”. He opined that the larger the code base, the more difficult it is to change, because TDD creates multiple levels of indirection that must be kept in sync. As a result, it is costly to change, costly to understand what the layers of indirection are doing, and costly to understand what the system is doing.
People get addicted to TDD, and that leads to overkill. Using more code to do the same thing is bad, unless there is domain complexity that genuinely needs to be broken down. Every additional wrapper or layer of indirection exacts a high price in understanding, evolution, and maintenance.
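To make that complaint concrete, here is a minimal sketch (in Python, with entirely hypothetical names; the actual example DHH shared was Rails code) of the layering pattern he criticises: each class exists mainly so the one above it can be unit tested in isolation, yet none of them adds domain behaviour.

```python
class SmtpGateway:
    """Wraps the real mail library so tests can mock it out."""
    def deliver(self, to: str, body: str) -> None:
        print(f"SMTP -> {to}: {body}")  # stand-in for a real send

class MailerService:
    """Wraps the gateway so callers can be tested without SMTP."""
    def __init__(self, gateway: SmtpGateway) -> None:
        self.gateway = gateway

    def send_welcome(self, to: str) -> None:
        self.gateway.deliver(to, "Welcome!")

class SendWelcomeEmailCommand:
    """Wraps the service so the controller can be tested without it."""
    def __init__(self, mailer: MailerService) -> None:
        self.mailer = mailer

    def execute(self, to: str) -> None:
        self.mailer.send_welcome(to)

# The behaviour is one line, but a reader must traverse three hops to find it.
SendWelcomeEmailCommand(MailerService(SmtpGateway())).execute("a@example.com")
```

Each hop is individually easy to mock and test, which is exactly the point being argued: the testing convenience, not the domain, is what motivated the structure.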
Kent offered a middle ground: focus on understanding when each design approach is worth its cost and when it isn’t. You cannot apply every design idea to everything all the time. Context is important; otherwise you will be in deep yogurt whether you write tests or not. While TDD does put evolutionary pressure on design, testability as a principle also puts pressure on design. What is the right grain size for tests? People want to go to one extreme or the other, whereas the goal should be to stay aware of the different dimensions and adapt your style to the actual cost-benefit structure you are currently facing. It is okay not to take extreme positions on TDD.
Martin felt that this test-induced design damage has nothing to do with TDD; the driver is the desire to make something testable. Is the need for isolation from different layers the reason TDD gets blamed for bad design? When people say they want to create indirection layers, what they actually want is to isolate themselves from specific layers (e.g., Rails or the database). They want the app to be independent of the environment and the world around it.
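As a rough illustration of the isolation Martin describes, here is a sketch (with hypothetical names, not code from the conversation) in which the application depends on a small port rather than on the database, so the environment around it can be swapped out:

```python
from typing import Protocol

class OrderRepository(Protocol):
    """Port: the only thing the application knows about storage."""
    def find(self, order_id: int) -> dict: ...

class InMemoryOrderRepository:
    """Test double: the application runs with no database at all."""
    def __init__(self, orders: dict) -> None:
        self._orders = orders

    def find(self, order_id: int) -> dict:
        return self._orders[order_id]

def order_total(repo: OrderRepository, order_id: int) -> float:
    # The application logic knows nothing about Rails, SQL, or drivers.
    order = repo.find(order_id)
    return sum(line["price"] * line["qty"] for line in order["lines"])

repo = InMemoryOrderRepository({1: {"lines": [{"price": 2.5, "qty": 4}]}})
assert order_total(repo, 1) == 10.0
```

Whether this indirection is worth its cost is precisely what the debate is about; the point here is only that the motive is independence from the environment, not TDD itself.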
However, DHH countered that people see isolation as a goal because of TDD, since something that is isolated is easier to unit test. For instance, in an MVC Rails application, isolating the layers and testing only the controller or application layer doesn’t necessarily help. The relationship between cohesion and coupling should be fluid and well balanced; at times you want things to be highly coupled rather than aiming for loose coupling everywhere. In the ideal situation, testing drives you towards a better design: tests should tell you how to make your system better. In most cases with TDD, though, the opposite happens: “this is easily testable, therefore it is better”. That is a fallacy a lot of poor decisions are being driven off, not a trade-off.
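A hypothetical sketch of the failure mode DHH is pointing at: a “unit” test that mocks every collaborator is easy to write, but it largely restates the implementation instead of demonstrating that the design got better.

```python
from unittest.mock import Mock

def create_user(params: dict, repo, mailer) -> dict:
    user = repo.save(params)       # in the test, repo is a mock
    mailer.send_welcome(user)      # ...and so is mailer
    return {"status": 201, "user": user}

def test_create_user():
    repo, mailer = Mock(), Mock()
    repo.save.return_value = {"id": 1}
    result = create_user({"name": "ada"}, repo, mailer)
    repo.save.assert_called_once_with({"name": "ada"})
    mailer.send_welcome.assert_called_once_with({"id": 1})
    assert result["status"] == 201  # green, without exercising any real behaviour

test_create_user()
```

The test passes trivially, yet it only proves that the code calls what the test said it would call; “easily testable” has not made anything better.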
Building on the email example that DHH shared, Kent offered another interesting counter-insight: the option of reifying the intermediate result. In compilers, for instance, intermediate results such as the parse tree, single-assignment form, or a symbolic form of assembly code are reified, and that reification buys a great deal of leverage in both testability and design. Mocking is not the only way to eliminate external dependencies. In the case of testing emails, there is probably a missing piece of the design: an intermediate result that would make the code testable without mocks. When one finds a system very hard to test, it is usually a symptom of poor system design. We should ideally aim for a testable design that is easy to comprehend, easy to use, and easy to modify in various different contexts.
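A minimal sketch of what reifying the intermediate result might look like for the email case, assuming hypothetical names (Email, compose_welcome, and deliver are illustrative, not Kent’s code): the message is built as a plain value first, so tests can assert on it directly, and delivery stays a separate, thin step.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Email:
    to: str
    subject: str
    body: str

def compose_welcome(username: str, address: str) -> Email:
    """Pure function: the reified intermediate result is a testable value."""
    return Email(to=address,
                 subject="Welcome!",
                 body=f"Hello {username}, thanks for signing up.")

def deliver(email: Email) -> None:
    """The only side-effecting step, kept deliberately thin."""
    print(f"sending to {email.to}: {email.subject}")

# The interesting logic is tested with no mocks and no mail server.
msg = compose_welcome("ada", "ada@example.com")
assert msg.subject == "Welcome!"
assert "ada" in msg.body
```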
In Summary
Martin, Kent, and DHH brought great points to the table. What resonated with me was the emphasis on designing the testing plan, strategy, and execution in the context of the product under test (something I’ve found very helpful in my own experience), and a growing shift towards context-driven, testable design at the very start of the product life cycle.
What are your thoughts on TDD and Test-induced design damage?
Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.