Following up on a pot-stirring blog post in which he asserted that "anyone who continues to think that TDD slows you down is living in the stone age", Bob Martin takes a stab at providing deeper insight into the real applicability, role, and benefits of TDD.
He begins by taking on this big question: "Is TDD a replacement for architecture?". His example-backed answer, 'no, BUT...':
The notion that you can generate a viable architecture by starting with a blank screen and then writing one test case after the other is sheer folderol. There are decisions that you need to make that have nothing to do with tests.
Of course many of these decisions can, and should, be deferred for as long as possible. For example, the database schema is something that can likely wait for quite a long time. The decision to use Spring, JSF, Hibernate, JPA, etc. can also likely wait. The beauty of business rules is that they can, and should, be implemented independently of database and GUI models.
...
Here’s the bottom line. You cannot derive a complete architecture with TDD. TDD can inform some of your architectural decisions, but you cannot begin a project without an architectural vision. So some up front architecture is necessary. One of the most important up front architectural activities is deciding which architectural elements can be deferred and which cannot.
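Martin's point about deferring the database decision can be sketched as a business rule coded against an interface. A minimal illustration (all names here are hypothetical, not taken from his post):

```java
import java.util.List;

// Hypothetical sketch: the pricing rule depends only on this interface,
// so the decision between Hibernate, JPA, or a raw schema can be deferred.
interface ProductCatalog {
    long priceInCents(String sku);
}

class OrderTotaler {
    private final ProductCatalog catalog;

    OrderTotaler(ProductCatalog catalog) {
        this.catalog = catalog;
    }

    // The business rule: sum line prices, with no persistence knowledge.
    long total(List<String> skus) {
        long sum = 0;
        for (String sku : skus) {
            sum += catalog.priceInCents(sku);
        }
        return sum;
    }
}

public class DeferredSchemaSketch {
    public static void main(String[] args) {
        // An in-memory stub stands in for the eventual database.
        ProductCatalog stub = sku -> sku.equals("apple") ? 150 : 300;
        OrderTotaler totaler = new OrderTotaler(stub);
        System.out.println(totaler.total(List.of("apple", "apple", "pear"))); // 600
    }
}
```

Because the rule is exercised through a stub, it can be written test-first long before any schema exists.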
Having answered the architecture question, Martin moves on to tackle the next logical topic: "Is TDD a replacement for design?". The essence of his answer is this:
No. You still need all your design skills. You still need to know design principles, and design patterns. You should know UML. And, yes, you should create lightweight models of your proposed software designs.
...
The bottom line is that TDD is a design technique but should not be the sole design technique. All the old design rules and skills still apply; and TDD is a powerful way to inform and augment them.
Tying back to another statement from his "stone age" blog, Martin poses himself the question "Should TDD be used for every line of code?". Again, the answer is "no":
No. There is a set of problems for which TDD is not particularly helpful. GUIs are an example.
...
Of course it’s not just GUIs. It is the notion of fiddling that is the key. If you must massage the code into place. If you must fiddle with some aspect in order to please the customer. If there is some uncertainty that can only be resolved by a very rapid cycle of edit-and-run, then TDD is likely to be more of a hindrance than a help.
...
The trick to manage this is intense decoupling. You want to make sure you identify every bit of the code that does not need to be fiddled, and separate that code into modules that you can write with TDD. Make sure that the fiddled code is isolated and kept to a bare minimum.
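The "intense decoupling" Martin describes is often achieved with a humble-object style split. A minimal sketch (the names and the temperature example are invented for illustration):

```java
import java.util.Locale;

// Testable module: pure logic, suitable for TDD.
class TemperaturePresenter {
    // Converts a raw reading into the text the GUI should display.
    static String displayText(double celsius) {
        if (celsius < 0) {
            return String.format(Locale.ROOT, "%.1f °C (below freezing)", celsius);
        }
        return String.format(Locale.ROOT, "%.1f °C", celsius);
    }
}

// Fiddled module: kept thin and to a bare minimum, adjusted by eye
// in a rapid edit-and-run cycle rather than driven by tests.
class TemperatureView {
    void render(double celsius) {
        // Imagine fiddling with fonts, colors, and layout here.
        System.out.println(TemperaturePresenter.displayText(celsius));
    }
}
```

All the decision-making lives in the presenter, where it can be written test-first; the view is left with nothing worth testing.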
Having conceded that some tests are in fact better written after, Martin goes on to reiterate that this should be done only when necessary (when "fiddling" is required). He states that the primary reason for writing tests first is that "it greatly enhances the chances that every line and every decision is tested", explaining that even the most disciplined programmers are bound to write some degree of untestable code if the tests aren't written first.
Uncle Bob then poses this interesting question: "Given that we accept the need for tests, why the resistance to test-first?". To this, he posits the hypothesis that some people just aren't able to think through code incrementally:
Honestly, I don’t know [why there is such a high resistance to test-first]. Clearly it can’t be a productivity issue since we are going to write the tests anyway.
Perhaps some people don’t like the fact that writing tests first interrupts the flow. It’s true, when you write tests first, you cannot write a whole algorithm. You have to assemble that algorithm bit by bit as you add one test case after another. Maybe some people just don’t feel comfortable working this way.
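The bit-by-bit assembly Martin describes is easiest to see in his well-known prime factors kata, where each new test case forces a small evolution of the algorithm. A sketch of where that sequence ends up (the inline case comments paraphrase the incremental steps):

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of assembling an algorithm one test case at a time
// (the prime-factors kata): each numbered case forced a small change.
class PrimeFactors {
    static List<Integer> of(int n) {
        List<Integer> factors = new ArrayList<>();
        // Case 1: of(1) -> []      (start by returning an empty list)
        // Case 2: of(2) -> [2]     (add a single divisor check)
        // Case 3: of(4) -> [2, 2]  (turn the check into a while loop)
        // Case 4: of(9) -> [3, 3]  (loop over candidate divisors)
        for (int divisor = 2; n > 1; divisor++) {
            while (n % divisor == 0) {
                factors.add(divisor);
                n /= divisor;
            }
        }
        return factors;
    }
}
```

No single test case demands the whole algorithm; the general solution emerges from the accumulation of small, passing steps — which is exactly the working style some programmers find uncomfortable.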
Martin's final remarks respond to this common statement: "Wouldn’t it be faster without [having to worry about] such high test coverage?". First he concedes that getting high coverage in place for a legacy environment (one where the code does not have tests) does require a potentially high, long-term investment. In a non-legacy environment, and for new code within a legacy environment, his answer is quite different: in these cases, high automated test coverage speeds you up. His reasons why:
Firstly, you don’t do much debugging. How could you if you have tested virtually every line of code? My own experience with debug time is that it all but disappears. In the last year of intense development effort on FitNesse I have spent almost no time debugging. If I had to quantify that time, I’d put it at 5 hours or less.
Secondly, I simply cannot inadvertently break the code. The test suite finds such breakage within seconds! And this makes me fearless. When you are fearless, you can go a lot faster.
Thirdly, my tests are little examples of how to work the system. Whenever I forget how some part of the system works, I read the tests. They quickly get me back up to speed.
Fourthly, I’m not fighting a continuous barrage of bugs from the field. Even though I have thousands of users, my bug list is tiny. The time I spend in support is less than an hour a week, and usually that’s just pointing people at the right spot in the user guide.
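Martin's third point — tests as little examples of how to work the system — can be sketched with a test that doubles as documentation. The `WikiPath` API below is invented for illustration, not taken from FitNesse:

```java
// Hypothetical API: a dot-separated wiki page path.
class WikiPath {
    private final String path;

    WikiPath(String path) { this.path = path; }

    WikiPath child(String name) { return new WikiPath(path + "." + name); }

    String render() { return path; }
}

class WikiPathExampleTest {
    // Reading this test tells a returning developer exactly how
    // child pages are addressed, with no prose documentation needed.
    static void childPagesAreDotSeparated() {
        WikiPath page = new WikiPath("FrontPage").child("RecentChanges");
        assert page.render().equals("FrontPage.RecentChanges");
    }
}
```

Because the suite must stay green, these "examples" can never drift out of date the way written documentation does.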
Check out Bob's blog for more detail and concrete examples of these ideas, and be sure to also take a moment to read through the immense amount of feedback and additional nuggets posted in the comments.