1. Will you introduce yourself first?
My name is Steve Freeman, I'm an independent software development consultant, based in London.
One of the things we were just talking about before was why we ended up writing it. Partly, it was because we felt frustrated that we weren't getting the message across, in particular about the mock object techniques. It wasn't supposed to be an all-encompassing thing: let's write about the technique, get the word out there. At least people will know; they can disagree with us from a point of actually knowing what we're trying to say. We wanted to clarify it because, in the early days, quite a few people jumped on it and didn't quite get the point we were trying to make - they got some other points, but not the ones we intended. It got out of hand and took rather longer than that. What we ended up doing was backing into, if you like, some of the design styles that we've adopted. The book is really about the design techniques we use, which took us in that direction, as much as any particular detailed technique.
It's nice because we had a very strong community in London in those days, in the early days. It was a very tight little group. It's somewhere in the acknowledgments that we spent a lot of time arguing about stuff, which tested the ideas. One of the things we really liked as we were working our way through the book was that it comes out of various experiences that we've all had. A number of us worked in Smalltalk, which has what I think Ralph Johnson called a mystical view of objects: that there are these things and you send messages to them. That carried over into our emphasis - it's very much about the messages between objects, rather than what's inside them.
You pick whichever you want to belong to. One of our standard diagrams is: you've got this graph of objects and they are all connected up - the objects are the circles and the lines are the connections - and what we do is fade out the objects and concentrate on the connections, because for us, not always, but in a lot of the code we write, that's the interesting bit. The languages we use make that hard to see, because they concentrate much more on what's inside the circle. The other influence is that Nat has a very strong background in distributed systems and distributed objects, and I have a little bit of that. Again, you start to think in terms of protocols, rather than individual message calls or individual methods - how do these things talk to each other, and what's the relationship between these calls? Within the circles, within the blobs, which are the objects, they do whatever it is they do, and that's not our concern. When you do that, you find that it gives you quite a nice flexibility in the code, because you can take this one out and plug another one in.
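The "take this one out and plug another one in" point can be sketched in plain Java (the names here are hypothetical, invented for illustration): when an object reaches its collaborators only through an interface, the protocol between the objects is what's designed, and any implementation of that protocol can be swapped in.

```java
public class SwapExample {
    // The protocol between the objects - the line in the diagram, not the circle.
    interface PricingPolicy { long priceFor(int quantity); }

    static class Checkout {
        private final PricingPolicy pricing;   // a connection, not a concrete class
        Checkout(PricingPolicy pricing) { this.pricing = pricing; }
        long total(int quantity) { return pricing.priceFor(quantity); }
    }

    public static void main(String[] args) {
        // Two interchangeable implementations of the same protocol.
        PricingPolicy flat = quantity -> quantity * 100L;
        PricingPolicy bulk = quantity -> quantity >= 10 ? quantity * 80L : quantity * 100L;

        Checkout a = new Checkout(flat);
        Checkout b = new Checkout(bulk);   // plug a different one in, Checkout is unchanged
        if (a.total(10) != 1000L) throw new AssertionError("flat pricing");
        if (b.total(10) != 800L)  throw new AssertionError("bulk pricing");
        System.out.println("ok");
    }
}
```

Checkout never changes when the pricing rule does; only the wiring of the graph changes.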
Yes. Some of that came from a challenge. There used to be an architecture group in London that turned into the Extreme Tuesday Club, but I think it was Peter Marks or John Nolan who put down this challenge that said: "What happens if you write code with no getters?" It's not necessarily something you actually want to do, but it's a really useful exercise for forcing the way you think about code, and you can go quite a long way with that. The trick with a lot of these exercises is to push them too far. Then you can come back a bit once you know where the edge is, but it's amazing, when you try a lot of these techniques, just how far away the edges are.
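A minimal sketch of the no-getters exercise, with hypothetical names (this is one way to read the challenge, not an example from the book): instead of exposing state for callers to inspect, the object applies its own rules and reports outcomes as messages to collaborators.

```java
public class NoGetters {
    interface OverdraftListener { void overdraftAttempted(int requested); }

    static class Account {
        private int balance;  // internal state, never exposed via a getter
        Account(int openingBalance) { balance = openingBalance; }

        // Tell, don't ask: the account applies its own rule and notifies
        // a collaborator instead of letting callers check the balance.
        void withdraw(int amount, OverdraftListener listener) {
            if (amount > balance) listener.overdraftAttempted(amount);
            else balance -= amount;
        }

        // State only leaves as a message sent to a collaborator.
        void reportTo(java.util.function.IntConsumer statement) {
            statement.accept(balance);
        }
    }

    public static void main(String[] args) {
        Account account = new Account(100);
        int[] rejected = {0};
        account.withdraw(150, requested -> rejected[0] = requested); // over the limit
        account.withdraw(40,  requested -> rejected[0] = requested); // succeeds
        int[] reported = {-1};
        account.reportTo(b -> reported[0] = b);
        if (rejected[0] != 150) throw new AssertionError("expected rejection of 150");
        if (reported[0] != 60)  throw new AssertionError("expected balance 60");
        System.out.println("ok");
    }
}
```

Pushed this far the style gets awkward for some code, which is exactly the point of the exercise: you find where the edge is, then come back a bit.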
SF: It's kind of hard to make the case with some of these, because in the end it's all just methods and values and the rest of it, but it's more about a view on the code, or a view on the design, and a way of thinking about things that takes you in certain directions.
SF: We've all fallen into that trap at some point. One of the things we clarified in the process of writing this all up was the distinction between internal stuff and external stuff. If you've got an object, some bits of it are internal and some bits are collaborators, and what you want to do is mock only the collaborations. The trouble is that there aren't clean rules for that.
It's a heuristic, or a matter of taste - that's not quite the right word - you develop a sense for it. What you end up with is not writing very detailed expectations for every tiny little thing that's internal. Some things are value objects, where you just want to use the real thing; it's the collaborations, where you have an interaction with the outside world, if you like, that you mock. In fact - and this is one of the failure modes - if you've got these long sets of expectations or stubs, whatever you want to call them, that means something is going wrong with either the test or the design, or all of the above.
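The distinction can be shown with a small hand-rolled sketch in plain Java (hypothetical names, and a hand-written recording fake rather than JMock itself): the value object is used directly, while the collaborator at the object's boundary is the thing the test substitutes and asserts on.

```java
import java.util.ArrayList;
import java.util.List;

public class MockCollaborators {
    // A value object: small, immutable, cheap - no reason to ever mock it.
    static final class Money {
        final long pence;
        Money(long pence) { this.pence = pence; }
        Money plus(Money other) { return new Money(pence + other.pence); }
    }

    // A collaborator at the boundary: this is what gets mocked or stubbed.
    interface AuditLog { void recorded(String event); }

    static class Till {
        private Money takings = new Money(0);
        private final AuditLog audit;
        Till(AuditLog audit) { this.audit = audit; }
        void sell(Money price) {
            takings = takings.plus(price);        // real value object throughout
            audit.recorded("sale:" + price.pence); // message to the collaborator
        }
    }

    public static void main(String[] args) {
        // Hand-rolled recording fake standing in for the real audit log.
        List<String> events = new ArrayList<>();
        Till till = new Till(events::add);
        till.sell(new Money(250));
        if (!events.equals(List.of("sale:250")))
            throw new AssertionError("expected one audit event, got " + events);
        System.out.println("ok");
    }
}
```

One expectation on one collaborator; if the test needed a long list of them, that would be the failure mode described above.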
We're trying to be sensitive to that, and it's a skill that we learned over time. It's the tests getting unpleasant and painful - that's the clue. One way to respond is to refactor everything out, and what you end up with is something you just can't understand, because there are too many moving parts. It's learning to recognize when things just aren't right and you need this missing construct in there somewhere, which is what you ought to be talking to instead of the 17 different collaborators you think you've got at the moment - and then being able to respond to that. It's learning to be sufficiently responsive to what's going on in the code.
There are a good number of case studies out there. The thing about a lot of the texts that introduce anything - testing texts as well - is that they have these little examples, and they are fine and they make the point, but when examples are small enough, it doesn't matter what you use. You could chisel the code into stone and that would still work, because it's small enough. A lot of this stuff only kicks in when you have a big enough example - and it's not a terribly big example in the end, it's only a few classes.
One thing the book is strong on is the notion that end-to-end includes things like deployment. You start with version control and a build machine and you go the whole way through, because quite often, if you don't do that, whatever it is you are not exercising is the bit where the mistakes happen. In practice, particularly in enterprise systems, there is sometimes something that is just too hard, because you can't get hold of an instance. There was one system I worked on where they didn't have a proper test environment for the downstream system, so we only got it for an hour a week. We couldn't get the whole build thing going, but we made the effort and did the best we could with what we had available, and I think that is quite important.
You get into an interesting balance there again, because you have to put some care and attention into the tests, particularly at the high level, because if you just keep pushing, you end up with this combinatorial thing and everything grinds to a halt. It's one of the reasons why you see the "growing" word in the title: the gardening metaphor is very appropriate - it's constant care and attention.
It's one of the things we're both exercised about: we don't always succeed, but to the extent that we can, we try to make stuff readable, and that counts for the tests as much as for the code.
12. JMock has a very fluid style, the fluent interface style.
We get a lot of flak for the current JMock syntax; it's opinionated software. It does what we want it to do, and it doesn't suit everybody. The way it works is a pretty disgusting hack, because of the double brace thing.
It worked better with version 1.0. There was a trick in the editor, the IDE: we set the punctuation color to white, and then the stuff would just read - which actually happened by accident. We hadn't thought it through that way, but then we tried it and, as we pushed the ideas, it ended up as another attempt to reintroduce Smalltalk.
Yes, but as I was saying, that was more by accident than by design. The double brace trick does bring out the idea of the protocol, which you don't get with some of the other libraries. We do that because it's important to us.
It's a little bit about that, but it's more about scoping. The double braces are a way of managing scope: all the stuff that happens inside them is within the scope of the set of expectations you are about to construct. One of the tricks you get out of that is better completion in the IDE. Version 1.0 was better at this, but we still retained some of it: when you complete, you get prompted for stuff that's in that context. It's not perfect, but it is more scoped.
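The "double brace" hack is plain Java: the first brace opens an anonymous subclass, the second is an instance initializer running in that subclass's scope, so unqualified calls resolve against the builder's own methods - which is what scopes the IDE's completion. Here is a stripped-down stand-in (not JMock's real API, where expectation clauses like oneOf take the mock object itself):

```java
import java.util.ArrayList;
import java.util.List;

public class DoubleBrace {
    // A simplified, string-based stand-in for a JMock-style expectations builder.
    static class Expectations {
        final List<String> expected = new ArrayList<>();
        // Inside the double braces these methods are in scope unqualified,
        // so completion offers only the builder's mini-DSL.
        void oneOf(String message)    { expected.add("exactly once: " + message); }
        void allowing(String message) { expected.add("allowed: " + message); }
    }

    static List<String> checking(Expectations e) { return e.expected; }

    public static void main(String[] args) {
        // First brace: anonymous subclass. Second brace: instance initializer
        // that runs in the scope of that subclass, forming the mini-DSL.
        List<String> recorded = checking(new Expectations() {{
            oneOf("auction.bid(1000)");
            allowing("auction.hasReceivedBid()");
        }});
        if (recorded.size() != 2) throw new AssertionError("expected 2 expectations");
        System.out.println(recorded.get(0)); // prints "exactly once: auction.bid(1000)"
    }
}
```

The hack is "disgusting" because each `new Expectations() {{ ... }}` compiles to an extra anonymous class, but it buys a block whose contents are lexically scoped to the expectations being built.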
In this case there are a number of forces in there, but it's also because it's test code for us - it's support for test coding. We can do what we like in there, because it's for us. It's one of those things I've found: IDEs have more effect on your coding style than people give them credit for. The standard religious war used to be vi versus emacs; now it's Eclipse versus IntelliJ, and I think one of the differences is that Eclipse is especially good at doing multiple windows, while IntelliJ has this model where, although you can get some stuff attached at the side, it's really a single pane. I find that shapes coding style a little bit, because it changes how you switch context.
The interesting thing is in Visual Studio, where you've got regions, so you can collapse stuff away. Whether you like it or not, it exists and people code to it, and now you've got the partial class thing as well, which is perhaps not always used properly.
Speaking for myself, I'm quite happy to let the code follow where it goes, if you like, and to use some of these tools and environments to nudge you in the right direction.
20. Things that might be problematic?
Yes, look for some sort of stress points in the code. Sometimes you have to leave that a little while, because if you jump in too early, it's not clear which direction you should take it. Quite often - or from time to time - you just need to let the code rot a little bit, as long as you remember to go back and fix it.
I was impressed, watching the DVD for WALL-E, where they were talking about one of the scenes where EVE rescues WALL-E - it speaks to this point. Apparently, in the first cut they had it the other way around, or something like that - I can't quite remember exactly - but the scene was structured completely differently. They went all the way through the previews and it was almost done, right near the end, and they said, "This isn't right!" The characters were wrong - they hadn't done the characters right - so they spent the money and fixed it.
That's part of it, because if you are putting a film out, once you've done it, it's going to last. In our world, things are a little bit more flexible - it depends on what you are doing, but they are usually a little bit more flexible. In the end, though, code seems to last forever. It's always tricky because, with the financial cycles, the accounting isn't right; but you're probably going to be the one who comes back and fixes this later, so you might as well get it right now.
Now we're at about the 10 year mark with Agile and XP. A whole generation has come up without the previous history.
One of the nice things about the new generation is they haven't got our inhibitions. It's always a generational thing, but they haven't got some of our baggage either. We've got Eric Evans talking today, and he's been on this mission for a little while. He points out that the people who did the initial XP project, the C3 project, came from a long tradition; they had all this experience from before, which fed into it. It's like the material in the book: it's not until you take it apart and try to write about it that you understand where it comes from. It's back there because you made the old mistakes, so you get to make some different mistakes. His pitch at the moment is that there's historical stuff, previous experience, we can use, without carrying the dead weight of the old ways we were stuck in for so long.
It's certainly true. Rachel Davies said that a lot of the teams she visits just don't seem to spend enough time figuring stuff out before they do it - just stepping back, looking at the design and working it through.
27. A little bit of an overreaction we've had in the Agile community.
You might say you push it too far and you come back.
28. It's all the learning process.
One of the things I was thinking about: I saw a Henry Petroski talk some years ago at OOPSLA - he's an engineer and a very good speaker, with various books on bridge building. His claim was that you can trace bridge failures back through time, with a new design failing roughly every 30 years; the pattern could be followed all the way back to the Victorians. His argument was that what happens is you come up with a new design - a new kind of suspension bridge or something - and you do it, and it's great, and you over-engineer it like crazy because you don't know what the limits are. You are using the old limits plus a bit for safety, and then, because engineers are engineers, they try to make it lighter and cheaper and faster, and they start cutting away at the margins.
This goes on for a while and the bridge gets lighter and cooler, or whatever, and eventually someone goes too far and the bridge falls over. Remember Tacoma Narrows? The bridge over the Thames and all that? That's because, at that point, a new generation has come to the idea with all that history gone. They come to this thing and they haven't yet experienced where the limits are, so they are going to find them for you. I suspect we get this too, every 2 or 3 cycles now: the object thing, then the Agile thing, and maybe some other things as well. What we should expect is that we push it too far and then we have to come back.
30. Yes, you can get that code pretty quickly, right?
Some people disagree, but the IDEs and, more to the point, garbage collection. The worst bug I ever had took 3 weeks to find; that was C++ - C underneath, with C++ on top of it. We just don't have to deal with that any more, on the whole.
32. That still happens in Java and C#.
That's right. Not so much the compiler, but the test harness and the rest of it. You bump into good teams where they're on a 30-40 minute build, and it's like there was never a single moment when it actually went wrong - it's just that they were busy and there were deadlines.
The issue with that is whether you fix it - if you care enough to go and fix it. Because none of it is ever going to be anything that you can't fix.
That's the point. But I was thinking of something I was reading about the standard Toyota thing: if you are doing that kind of optimization with some of the Lean stuff, it's not one big thing, it's not an 80/20 thing. If you have a production line and it takes you 6 hours to make the thing, it's not the case that in the middle of it there's going to be one 3 hour delay; it's going to be lots and lots of little delays all the way through the pipeline. I think that's the way it is with build stuff. We have this extra checking test that never catches any errors, so we can get rid of that; we parallelize a little bit here and we parallelize a bit there. It's just continually chipping away and caring enough to do it. When you've got an hour or two, you go away and chip at the build to make it faster.
I think part of what we're trying to write up in the book is that the same applies to the code and, if you like, the requirements - the whole chain. At some level it takes a bit of attitude, to say, "I'm not going to put up with this."
I was talking to a client the other day and we had this discussion about being pragmatic and the rest of it. That's true, but the other part of it is that the reason I do this stuff and use all these techniques is because they make me go faster. I'm not doing them because I'm worried about losing my Agile license; I do them because, when I stop doing them, I get into trouble - usually quite quickly something goes wrong.