Kent Beck, author of “Extreme Programming Explained” and “Test Driven Development: By Example”, suggests that software, like golf, is both a long game and a short game. JUnit is an example of a long-game project: lots of users, stable revenue ($0, sadly for all involved), where the key goal is simply to stay ahead of the needs of the users.
When Kent started JUnit Max, it slowly dawned on him that the rules had changed. The killer question was (and still is), “What features will attract paying customers?” By definition this is an unanswered question: if JUnit (or any other free-as-in-beer package) implements a feature, no one will pay for it in Max.
Success in JUnit Max is defined by bootstrap revenue: more paying users, more revenue per user, and/or a higher viral coefficient. Since, by definition, the means of achieving success are unknown, what maximizes the chance of success is running lots of experiments and incorporating feedback from actual use and adoption.
JUnit Max reports all internal errors to a central server so that Kent can be aware of problems as they come up. This error log helped him find two issues. For the first, he was able to write a simple test that reproduced the problem and verified the fix. The second was easily fixed, but Kent estimated that writing a test for it would take several hours; in that case he just fixed it and shipped.
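A minimal sketch of the first pattern, a regression test that reproduces a logged problem and then verifies the fix, might look like the following. The class, the bug, and all names here are hypothetical illustrations, not JUnit Max's actual code:

```java
import org.junit.Test;
import static org.junit.Assert.assertNotNull;

// Hypothetical sketch of a regression test for a bug found via an error
// log; the classes and the bug are illustrative, not JUnit Max's code.
public class ErrorReporterTest {

    // Minimal stand-in for the code under test.
    static class ErrorReporter {
        // The (illustrative) bug: a throwable with a null message used to
        // yield a null description; the fix falls back to the class name.
        String describe(Throwable t) {
            String message = t.getMessage();
            return message != null ? message : t.getClass().getName();
        }
    }

    @Test
    public void throwableWithNullMessageStillGetsADescription() {
        // Reproduce the condition seen in the error log...
        ErrorReporter reporter = new ErrorReporter();
        // ...and assert the fixed behavior: this fails before the fix and
        // passes after it, so the fix stays verified from then on.
        assertNotNull(reporter.describe(new RuntimeException()));
    }
}
```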
Kent goes on to say:
“When I started Max I didn’t have any automated tests for the first month. I did all of my testing manually. After I got the first few subscribers I went back and wrote tests for the existing functionality. Again, I think this sequence maximized the number of validated experiments I could perform per unit time. With little or no code, no tests let me start faster (the first test I wrote took me almost a week). Once the first bit of code was proved valuable (in the sense that a few of my friends would pay for it), tests let me experiment quickly with that code with confidence.
“Whether or not to write automated tests requires balancing a range of factors. Even in Max I write a fair number of tests. If I can think of a cheap way to write a test, I develop every feature acceptance-test-first. Especially if I am not sure how to implement the feature, writing a test gives me good ideas. When working on Max, the question of whether or not to write a test boils down to whether a test helps me validate more experiments per unit time. If it does, I write it. If not, damn the torpedoes. I am trying to maximize the chance that I’ll achieve wheels-up revenue for Max. The reasoning around design investment is similarly complicated, but again that’s the topic for a future post.”
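To make that acceptance-test-first rhythm concrete, here is a minimal JUnit sketch. The feature (running recently failed tests first) and all names are hypothetical illustrations, not JUnit Max's actual code; the point is that the test is written before the feature exists, fails first, and drives the implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical acceptance-test-first sketch; names and feature are
// illustrative, not JUnit Max's actual code.
public class PrioritizerAcceptanceTest {

    // Minimal stand-in implementation, written after the test below.
    static class Prioritizer {
        // Orders test names so that recently failed ones run first.
        List<String> order(List<String> tests, List<String> recentlyFailed) {
            List<String> ordered = new ArrayList<>(tests);
            ordered.sort(Comparator.comparingInt(t -> recentlyFailed.contains(t) ? 0 : 1));
            return ordered;
        }
    }

    // Written first, as the acceptance criterion for the feature:
    // a test that failed on the last run should execute before the rest.
    @Test
    public void recentlyFailedTestsRunFirst() {
        Prioritizer prioritizer = new Prioritizer();
        List<String> ordered = prioritizer.order(
                Arrays.asList("FooTest", "BarTest", "BazTest"),
                Arrays.asList("BazTest"));
        assertEquals("BazTest", ordered.get(0));
    }
}
```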
Ron Jeffries, author of “Extreme Programming Installed”, replies: “I trust you, and about three other people, to make good short game decisions. My long experience suggests that there is a sort of knee in the curve of impact for short-game-focused decisions. Make too many and suddenly reliability and the ability to progress drop substantially.”
Johannes Link, Agile Software Coach, says: “I have seen a couple of developers who were able to make reasonable short-term / long-term decisions. I have yet to see a single team, though, let alone an organization.”
Michael O'Brien, by contrast, commented: “A great article and the right decision, I think. It’s too easy to get caught up in beauty and consistency when you’re writing code, and forget what you’re writing code for. I write tests because it makes writing code easier and gives me confidence the code does what I think it does. If writing a test isn’t going to help me achieve that, I say skip it.”
Olof Bjarnason thinks that “one relevant idea Kent brings up is feedback flow. If we focus on getting that flow-per-unit-time high, we are heading in the right direction. For example, he mentions short-term untested-features-adding being a maximizer of feedback-flow at the beginning of the JUnitMax project, since the first test was so darn hard to write (it took him over a week). He got a higher feedback-flow by just hacking it together and releasing; his ‘red tests’ were the first few users and their feedback.”
Guilherme Chapiewski raises the concern that sometimes you think it’s a short game when it isn’t. In Guilherme’s case, he wrote a project without any tests as a proof of concept. It took off and people started to use it, quickly finding a few bugs that couldn’t be fixed. In the end he concluded that the code was rotten and untestable; he threw it away and started again from scratch.
Kent replies to many of the comments, saying: “I agree that confusing the practices and the principles leads to problems. And that tests lead to better designs. That’s why I have ~30 functional tests and ~25 unit tests (odd balance because Eclipse apps are so hard to test). I do almost all of my new feature work acceptance-test-first. It helps reduce the cycle time.”
Does this idea safely scale beyond one or two people? Aside from Kent Beck, do many people have the judgment to pull this off?