Back when I was sixteen I went off to University and studied Philosophy and Psychology, and after a number of excursions into other fields I got into software development through some work in expert systems and artificial intelligence; that was in the mid-eighties. At that time I ran into the programming language Smalltalk, which really changed my life. Not only the language and the way it made you think of things differently, but the people who worked with that language, a lot of people whom I have really had the honor of working with and learning from (I will do the name-dropping later). In any case, as of the late 80's I was working in software development and I started working together with Kent Beck, and in the mid-90's he moved over to Europe and we started meeting up regularly, and eventually he ended up coming to the company I was working for. I worked as his assistant, and at that time we were putting together a bunch of ideas that eventually became Extreme Programming.
At that time it was obvious to us that this stuff worked, although nobody really understood why or how. And I guess that was the question that led me away from development into the project management aspect, which led to my work with Scrum, which I do a lot of, and now further in the direction of social complexity science, a relatively new science that, the way it looks, will provide us with a theoretical foundation for why this stuff works and give us some tools for adopting and adapting it.
That is a starting point. Social complexity science goes beyond... what you are describing is something we call "mathematical or computational complexity". The basic difference is that mathematical or computational complexity says that the way systems evolve is by the action and interaction of what we call "agents," each of whom obeys a set of rules. By acting according to these rules, the system organizes itself and the behavior of the system emerges. The difference in social complexity science is that we say we are working with people, and people aren't birds or ants; we don't like to obey rules. There are different ways that the agents in the system interact with each other, and there are different ways of trying to influence the behavior and the emergent properties of the system.
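[ed: To make the "agents obeying rules" idea concrete, here is a minimal sketch, not from the interview, of a classic model from computational complexity: Schelling's segregation model. Each agent obeys one simple rule (move if too few neighbours resemble it), and large-scale clustering emerges that nobody designed. All parameters below are illustrative.]

```python
# A rough sketch of an "agents obeying rules" model: Schelling segregation.
# Each agent follows one simple local rule, and global clustering emerges
# without any central control.
import random

SIZE, THRESHOLD = 20, 0.3                # 20x20 torus grid; want 30% similar

cells = ['A', 'B'] * 180 + [None] * 40   # 360 agents, 40 empty cells
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(r, c):
    """True if the agent at (r, c) has too few similar neighbours."""
    me = grid[r][c]
    if me is None:
        return False
    neighbours = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if (dr, dc) != (0, 0)]
    occupied = [n for n in neighbours if n is not None]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < THRESHOLD

for _ in range(50):                      # let the system self-organize
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE) if unhappy(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    random.shuffle(empties)
    for (r, c), (er, ec) in zip(movers, empties):
        grid[er][ec], grid[r][c] = grid[r][c], None
```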
3. It sounds really abstract. As an Agile coach and a consultant, why is this interesting to you?
It has a lot of practical applications. One of my favorite examples is giving parties, which is something we like to do after we ship software, and it's a really simple metaphor that people can understand. One of the main properties of emergent complex systems is something we call "retrospective coherence," and I think by understanding this property you understand a lot more of why these systems work the way they do. Let me give you an example: imagine you gave a party last weekend at your house, and it was a great party; people called the police and the neighbours were banging on the walls and stuff, it was really a hit. And the next morning you are thinking about why it was a good party.
Probably you had good music, invited the right people, had enough of the right stuff to eat and drink. So if I ask you to organize a party in a month or so, now you know all the factors that made this a good party; we have essentially done a real-life simulation of this, so we know exactly the way it works. So we are going to invite all the same people, we are going to play exactly the same music, and I will give everybody a list of all the things they have to eat and drink. Do you think this party is going to be a success?
Sounds pretty weird. Even if we leave aside the list of things that people have to eat and drink, we're probably not going to be able to say "Yes, this is the key to success". And the reason why is what we call "retrospective coherence". In a socially complex system we have cause and effect, but since the cause and effect emerge as the system emerges, this is only obvious after the fact, which means it's only obvious, only coherent, in retrospect. Knowing this, you can't predict the behavior of the system in advance. And this makes it challenging to say: "Well, how do we get to where we want? How do we get a successful party, or a successful system?" The bottom line, to make this really clear, is: "What would you rather have: your initial feature set or a successful product?" Because the two will never be equal.
Yes, there are too many factors out there that are unsure or changing. To make it simple: there are three vectors that we would typically work with at the level of a software project. The first is requirements, which are changing and unsure in the beginning; everybody knows that the only thing you can be sure about with requirements is that they will change. The second is the technology you use, because that's constantly changing, and the third is the people involved in the project, people as individuals and also people in their dynamics, working together as a team. All these things change, and that's ok. It's very challenging to come up with a successful project, a successful system, if you have to deal with all this uncertainty. And this is where a number of methodologies which have been used in the past tend to break down.
7. Hang on, what? "Waterfall is a great methodology"? What are you talking about?
You didn't let me finish! It's very valuable to use in certain places, in certain domains; software development just happens not to be one of them.
8. Ha, ok.
Any place where you know things in advance, where you have a high degree of predictability and what you are looking for is scalability and reproducibility with a high level of quality, you can use a process like Waterfall or another ordered process, like Lean Manufacturing, that scales very well and allows you to produce multiple units of one product very well. Software development isn't like that.
I am not quite sure of that. One of the aspects of social complexity science is something we call "sense-making," or in its full term "multi-ontological sense-making". It is essentially a pre-hypothesis technique of analyzing information without categorizing it, to figure out exactly what we are dealing with. Based on the research I have been doing with groups of people, it seems that we are not always right in picking out what type of domain our problem belongs to, and therefore what type of toolset we should be using for it. This is associated with something we in psychology call a "cognitive bias". The interesting thing is that in doing these exercises you learn not only what type of system you are working in, but why you are looking at this type of system as being this way. I can't give you an example that will fit in here easily.
10. So you've got this tool in your tool kit, of "sense-making", what other tools do you have?
One of the other tools that we use is something called "social network analysis".
11. "Social network analysis," what's that?
A tool for analyzing communication paths and contact paths between people, and for analyzing what we call social or shadow networks in organizations. Social networks or shadow networks are different from the hierarchical organization: what you see on the organigram is not the way things work.
Anybody who has worked in a large organization knows that to get things done you have to work differently than your organigram [ed: hierarchy chart] says. Look at a hospital: who runs a hospital? It's not the administrator, it's not the doctors.
12. Everybody knows it's the nurses that run a hospital!
It's the nurses. Who runs the army? It's not the officers, it's the sergeants. The stricter and the more formal your hierarchical organization is, the higher the number of shadow organizations you have. In one of the research projects that my mentor Dave Snowden did at IBM, he found over fifty thousand shadow networks or shadow organizations inside of IBM, which is where things actually get done. And finding out how these networks work is the way you find out how to get buy-in.
13. Wow, so, "buy-in" is a really important aspect of what coaches do, isn't it?
Yes. If you want to look more in that direction, one of the best books you can get on that is by a gentleman named Art Kleiner: "Who Really Matters: The Core Group Theory of Power, Privilege, and Success". This was written before the current surge of popularity of social networks. (The term "social network" has gained currency because it is applied not only to the research done in what has now become "value networks", but also to what's happening in networks like MySpace, which is a sort of trivialized, popularized form of that.) But Kleiner's book has some very good ideas about what to do, how to find out who's in this social network, this shadow network, that has the power to decide where and how things get done.
14. So tell us about another tool.
Ok, one of the other tools we use is something called "narrative inquiry" or "archetypal narrative".
15. "Narrative inquiry" - is that like "appreciative inquiry"?
Not necessarily. Appreciative inquiry is a very valuable technique for trying to remember things which normally get forgotten. Narrative inquiry picks up the things that we tend to remember, which are the bad things: why does bad news travel faster? Why do we always tend to forget the positive things? And what we say is: we value this bad news, because it gives us a history of things that went bad, so we can avoid those mistakes. This is a technique that has been around for thousands of years in other cultures.
One of the first documented uses of this is in the old Islamic Sufi culture, where when you screwed up, in order to pass along the admission of failure without the attribution of blame, you created a story and attributed it to the wise and incomparable Mullah Nasrudin. You find there are volumes of books of stupid things that Nasrudin did. So if you were a Sufi and you screwed up, you wrote a story about something Nasrudin did. There are stories that go back to the tenth and eleventh centuries, and new ones are still being written: there is a story of Nasrudin landing at Heathrow airport and finding out his visa had expired.
16. Oh, this is kind of like "Chet did it".
Yes, this is like "Chet did it".
Same type of thing. Or, as Dave Snowden says, his son will admit to doing anything wrong, as long as it's his teddy bear who did it. This is something with some really interesting practical applications, remembering that negative narrative is often the most powerful technique we have for communicating information. Look at experience reports at conferences: often the most popular tracks are the experience reports, and the ones that are best received are the ones that also report things we did wrong, things that went wrong, problems we had.
Yes, I think it's one of the most important tools we have, rather than just saying "everything always worked well", which can start to sound like boasting. Capturing what went wrong gives other people a chance to learn from it and not make those mistakes. And actually I have started using this technique in advance, as a way of getting requirements from people.
18. So, you ask people "Tell me what you don't want in the product"?
Yes, and let's go even further: "Tell me some of the worst things you've ever seen in this type of software. Tell me the stuff you don't want." Get a group of people from your customer together, and get them talking about that. What will happen is a very interesting sociological phenomenon we call "one-upmanship," where somebody says: "I have seen this really bad stuff in software. Can you imagine how bad this was?" And somebody else says: "You think that's bad, just hear my story...". People will start telling me things they don't want, and in order to compare them and show how bad they are, they'll start telling you about positive qualities and traits of the software that they would like to have. Go and try that; it's one of the easiest techniques to pick up and apply really quickly, and you will be surprised what happens when you use it.
19. Can you give me an example of how that "one up" conversation might go?
"One of the worst error messages that I ever saw was a modal dialog box that popped up in the middle of this screen that didn't let me see what the wrong data was below it, and that I could only click on to get rid of, which wouldn't let me change the data!" And somebody else says: "Oh, you think that was bad. I got one that I went on to the next web page and it lost all state and I wasn't even able to see what I had put in that was wrong". And in order to keep getting the one-upmanship here, somebody eventually says: "Wouldn't it be nice if..." And that's a trait that just happens in narrative, that when it tends to go too strongly to the negative side somebody would bring in something positive, either to show how bad that actually is or just to balance it out.
20. They say "why can't they just do this?" and come out with the simple thing?
Yeah, "Wouldn't it be nice if", or "Why couldn't they just do". And those are the weak signals that you can pick up on, to find out the things that become really important to them.
That's right; this is another case of what we call "ontological myopia". Best practices are valid within certain ontological boundaries. But most of the work that we do is in the complex realm, where there is no "best practice," because the practice emerges as the system emerges. Remember, "best practice" is valid, but it carries a number of very implicit assumptions that we tend to forget. One of them is that there is "a best way" to do something. Another is that we are able to codify this knowledge, and this runs into the knowledge management dichotomy between context and content: the assumption that we can extract the practice from its context and transfer the pure content. Third, that we are able to get people to apply this best practice, and fourth, that it's in their best interest to do so.
22. So, you don't think that those assumptions underlying the "best practices" concept are valid?
They are true within certain domains, within certain bounds. What we believe (and this is something that research also supports) is that a large part of the activities that we do, a large part of the issues we deal with, are not in this domain.
This is the focus of my research, and I am slowly getting to the point where I can start talking about it and giving ideas and tools to other people, but it's not quite ready for prime time yet; this is pretty bleeding-edge research. What I do in my day-to-day practice is use a number of the sense-making techniques to help look at issues and structures in teams in a different way, and use these same techniques to design creative interventions, in order to find different ways of getting teams out of the hole they are in, or off the beaten track they have been on.
One of the walls that you hit is realizing that, when you work as a team, unless you are in a very small company, you tend to work in the context of a larger organization, with different teams that you have to interface with. Now these teams will either be open and Agile themselves, in which case you'll start resonating with each other, or they will tend to be stiff and old-fashioned, in which case you'll start running up against the wall, and if you don't watch out you'll start beating your head against the wall. Part of the work there is designing buffers (essentially just pillows, so that you don't hurt yourself against the wall), so that you can stabilize the situation and even design an interface on a personal level: how do we deal with these people, and how do we deal with this team?
Go back to the three main points that we have in defining a socially complex system: our requirements, the fact that requirements are unclear or emerging, the fact that we have to deal with our customers; our technology...
26. The whole issue of getting the product backlog together is difficult for some teams?
The whole issue of defining or deciding on what technology you are going to use. Anybody who has been on more than one project knows that your technology is normally decided not on a meritocratic basis, but on a political basis. Another one is dealing with the people in your team and all their idiosyncrasies. You don't even have to go outside the team to find out where the problems are.
27. Can you give us an example of using social network analysis in your work?
The thing is that SNA is a very touchy technique, because the problem with social network analysis is that you cannot be anonymous. An example: about a year and a half ago I did a project retrospective at a client site in London together with Rachel Davies, and one of the experiments we did there was a simple social network analysis of the team and their interfaces, to find out who was influencing the team from outside and where the connections were. This was a good team that had been together for a while and was quite happy working together, and there was nothing to fear about something sensitive coming out, so we said: "Hey, why don't we just try this as a fun experiment for all of us?"
We did the questioning, got the data together, put together a social network graph, and I showed it on a beamer [ed: projector] without any names; within five seconds people were able to point out who the individuals were, without even having any names on there. Now, once you get into trickier questions about interpersonal things ("Who really knows what's going on in this organization? Who do you confide in when you have questions or problems? Whose buy-in do you need so that something actually gets done?"), you get into really ethically touchy areas, and that's why SNA is probably one of the techniques that will take longest to find an established way to really use it.
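[ed: A minimal sketch, not from the interview, of the kind of analysis described above, assuming the Python networkx library: build a directed graph from hypothetical "who do you go to?" survey answers and rank people by in-degree centrality. All names and data are invented.]

```python
# Sketch of a simple social network analysis: survey answers -> graph ->
# centrality. People many others turn to are often the "shadow network"
# hubs, whatever the org chart says.
import networkx as nx

# Each pair (asker, helper) is one invented answer to the question
# "Who do you go to when you have a question or a problem?"
answers = [
    ("ann", "bob"), ("carol", "bob"), ("dave", "bob"),
    ("bob", "erin"), ("carol", "erin"), ("frank", "carol"),
]

g = nx.DiGraph()
g.add_edges_from(answers)

# Rank by in-degree centrality: the fraction of others who name you.
hubs = sorted(nx.in_degree_centrality(g).items(), key=lambda kv: -kv[1])
for person, score in hubs[:3]:
    print(f"{person}: {score:.2f}")
```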
You would be surprised to find that we have our own type of Nasrudin story in our culture already; we just don't see it as such. We all know it: the Dilbert cartoon. The characters in Dilbert are archetypes for different personality and behavioral patterns. Why do Dilbert cartoons hang on the walls of most of our offices? Why are those the ones hanging on the walls, and not others?
Ok, one thing that I have done with one of my clients, who was interested in capturing this knowledge without it carrying the attribution "Oh, you screwed up", was to analyze the types of archetype that came up in the organization. Archetypes are different from stereotypes: stereotypes are classically somebody you would like to be, or somebody you are happy you're not. Archetypes are a mixture of attributes, so that you can identify part of yourself in every archetype. We did an exercise to analyze and discover a number of archetypes, and then we had a cartoonist come in and draw a number of head and body positions of cartoon characters illustrating these archetypes. When you screwed up, your job was to put together a cartoon in Photoshop or whatever, describing what happened, and so build up your own cartoon collection.
30. Building cartoons? No kidding, that's great!
But this gets back to one of the most important things that we have to realize: this idea of transparency in everything. This is one of the first things I teach in my Scrum course: "We don't make mistakes, we learn." One of the greatest challenges we have is getting away from a mistake- or blame-driven culture into a learning culture, because blame is something we have been taught since our time in school. In school you don't learn... what you figure out in school is how to get the answers outside of school, so that when the teacher comes in and asks "Who knows bla bla bla, what the answer to this question is?" you are the one to put up your hand, because you have learned it outside and you can show off that you know it already. That's what schools are teaching us.
And any time you get blamed for trying something and not succeeding, you are going to be scared of trying it again. Any culture, any method that removes the ability to try things because of a blame culture is, at least in my eyes, just going in the wrong direction.
Yes, I did a session at Agile 2006 called "Making Sense of Agile". It was a small session, but everybody who was there was very happy with it, so happy, it seems, that I got invited back to do it again this year, and they gave me twice as much time, which means we get the chance to do more of these exercises and learn about these techniques and how to use them in real-life situations. So if you want to come along, come to Agile 2007.
Yes, it is one of the techniques I've used inside a couple of companies where we were trying to put together multi-site teams and needed a balanced skill profile on each team. Since people came from different sites, and one of the criteria was having each team spread out over different sites, I just let people run through a speed-dating exercise, talking to each other for a couple of minutes, and self-organize into the teams they'd like to be in, with the requirement that each team needed one person with UI skills, one person with middleware skills, one person with hardware skills, and so on.
33. How long did that take, and how did it work out?
There were twelve people, and in speed dating you have three minutes per pair. Essentially the exercise took about an hour, with them going around talking with everyone else and then doing some clustering, and then we went to lunch afterwards, which provided the social context to continue the discussion and find out whether these were the people they really wanted to work with. One of the basic ideas behind social complexity is: set up the environment, set up the context correctly, and good things will happen.
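[ed: The arithmetic works out: with twelve people, a round-robin rotation needs eleven rounds of six simultaneous pairs, and at three minutes a pair that is thirty-three minutes of talking, which with shuffling between rounds fits "about an hour". Below is a minimal sketch, not from the interview, of the standard "circle method" for scheduling such a rotation; the names and function are illustrative.]

```python
# Round-robin "circle method": fix one person, rotate the rest, pair
# opposite ends. n people meet everyone in n-1 rounds.
def speed_dating_rounds(people):
    """Yield one list of pairs per round; everyone meets everyone once."""
    assert len(people) % 2 == 0, "odd groups need a 'bye' seat"
    fixed, rest = people[0], list(people[1:])
    for _ in range(len(people) - 1):
        row = [fixed] + rest
        half = len(row) // 2
        yield list(zip(row[:half], reversed(row[half:])))
        rest = rest[-1:] + rest[:-1]     # rotate everyone but `fixed`

people = [f"p{i}" for i in range(12)]
rounds = list(speed_dating_rounds(people))
print(len(rounds), "rounds of", len(rounds[0]), "simultaneous pairs")
# -> 11 rounds of 6 simultaneous pairs (33 minutes at 3 minutes each)
```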
What's next for me is going back to the research data I have collected from doing these exercises with different groups of people; I would like to finally get a chance to sit down, work through it, and analyze it, to get some statistically significant evidence that this stuff actually does work.