You don't have to look far to find people still asking whether SOA can be used successfully. There are as many examples of SOA failing as there are of it succeeding, and the exact reasons behind success or failure are often shrouded in mystery, perhaps because the authors don't know them or can't discuss them for confidentiality reasons. Where there is success, though, one of the common threads is precision and good governance and management of the project. Yet rarely do we hear the words 'formal' and 'methods' used, despite decades of experience showing that system specification and verification based on strong mathematical principles can work extremely well.
Back in 2007 we discussed the W3C work on WS-CDL. As was mentioned then, WS-CDL is based around Pi Calculus and ...
CDL enables the business stakeholder, the business analyst, the enterprise architect and the application engineer to share their views of the same system in a synchronised fashion, by providing the means for each level of detail for each stakeholder to be captured without that detail necessarily being exposed to the others. CDL also provides the necessary provenance to enforce requirements at each level. In this fashion, CDL also enables the A in SOA, since it provides the manner in which architecture can be modelled, described and implemented.
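For readers who haven't met the underlying formalism, a toy illustration may help (our own rendering in an ASCII notation, not an excerpt from the WS-CDL specification). In the pi calculus the parties to an interaction are written as processes that synchronize on named channels, and the composed system either can or cannot get stuck, which is exactly the kind of property tools can check:

    Buyer  = order!(item) . price?(p) . 0        -- send on 'order', then receive on 'price'
    Seller = order?(x) . price!(quote) . 0       -- receive on 'order', then send on 'price'
    System = (new order, price)(Buyer | Seller)

Here c!(v) sends v on channel c, c?(x) receives it into x, 0 is the finished process, | runs processes in parallel, and 'new' restricts the channels to the composition. Every send in System has a matching receive, so it runs to completion; swap the order of Seller's two actions and both parties block forever, waiting to send to each other, which is precisely the sort of deadlock this kind of validation is meant to surface before any code exists.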
A few months later, Steve Ross-Talbot (one of the main drivers of CDL) wrote about how he saw CDL as the computer science equivalent of the micrometer ...
... before a line of code is cut, the CDL description is shown to be valid against the requirements and to be correct in computational terms (i.e. free from livelocks, deadlocks and race conditions).
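Those checks are performed by CDL tooling, but the underlying idea is easy to sketch. The toy Python below (our illustration, not the actual tooling; the protocol and all names are hypothetical) exhaustively explores the combined states of two communicating services and reports any reachable state where neither side has finished yet no message can move, i.e. a deadlock:

    # Each service is a state machine: state -> list of (kind, message, next_state),
    # where kind is "send" or "recv". Both protocols here are invented for illustration.
    CLIENT = {
        "start":   [("send", "request", "waiting")],
        "waiting": [("recv", "response", "done")],
        "done":    [],
    }
    SERVICE = {
        "idle":     [("recv", "request", "working")],
        "working":  [("send", "response", "finished")],
        "finished": [],
    }

    def successors(c_state, s_state):
        """State pairs reachable by one matching send/receive exchange."""
        for c_kind, c_msg, c_next in CLIENT[c_state]:
            for s_kind, s_msg, s_next in SERVICE[s_state]:
                if c_msg == s_msg and {c_kind, s_kind} == {"send", "recv"}:
                    yield (c_next, s_next)

    def find_deadlocks(start=("start", "idle")):
        seen, frontier, deadlocks = set(), [start], []
        while frontier:
            state = frontier.pop()
            if state in seen:
                continue
            seen.add(state)
            nexts = list(successors(*state))
            finished = not CLIENT[state[0]] and not SERVICE[state[1]]
            if not nexts and not finished:
                deadlocks.append(state)  # stuck while someone still expects progress
            frontier.extend(nexts)
        return deadlocks

    print(find_deadlocks())  # [] -- this toy protocol is deadlock-free

Rename the "request" message on one side and the checker reports the stuck state immediately. The analysis Steve describes does this kind of exploration over the full choreography, covering livelocks and race conditions as well.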
Well, Steve is back, and the methodology behind CDL has become known as Testable Architecture. In this entry (and an associated video presentation) Steve discusses a couple of real-world (insurance-based) use cases in which he has seen WS-CDL reduce the time to develop and deploy SOA by 80% and improve the chances of success.
The background, in both cases, was an in-flight SOA road map that was being executed. The road map required integration of legacy applications and portal development to support the user experience.
He then describes the initial non-CDL-based estimates for developing the solutions:
In the first instance the allotted time to develop a solution and provide the technical contracts was of the order of 15 days' effort. This would involve gathering requirements, functional and non-functional; looking at message formats to support the data model; understanding the business transactions so that the solution could be effectively monitored; delivering service contracts for the mapper and mediator services; and providing state diagrams to show the individual service behaviors needed and sequence diagrams to provide a collaboration context. The total number of services was four, including the client, the mediator, the mapper and the legacy service.
Steve mentions that the second example was more complex, with an estimate on the order of 60 days. The use cases were then tackled using the Testable Architecture methodology by framing them in "UML like notation" and creating sequence diagrams to feed the CDL model, which was then tested against those sequence diagrams. The results were interesting (though obviously, from a scientific perspective, they would need to be reproduced to have meaning elsewhere).
In the first case it took less than one day to gather the requirements and deliver the tested model. In the second case it took 4 days to do the same. The technical contracts, the WSDL, the BPEL, the BPMN, the state diagrams, sequence diagrams and much more took a few clicks, as they were generated from the CDL model. It may sound fantastical and mythical that we compressed the effort by some 80%. But the fact is that we did. Not only that, we found design flaws early, both in our use of BPEL and WSDL. We found competing requirements and corrected them, and we produced technical contracts that were far more rigorous, complete and unambiguous than we had managed to achieve by hand.
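To make the "tested against those sequence diagrams" step concrete, here is a hedged sketch in the same spirit (again our own toy, with invented participant and message names, not the actual tooling): treat each sequence diagram as an ordered trace of interactions and check that the choreography model accepts it.

    # A single-path toy model: the agreed global ordering of interactions.
    # Real choreographies also allow choice, loops and parallelism.
    MODEL = [  # (sender, receiver, message)
        ("client",   "mediator", "quoteRequest"),
        ("mediator", "legacy",   "policyLookup"),
        ("legacy",   "mediator", "policyRecord"),
        ("mediator", "client",   "quoteResponse"),
    ]

    def conforms(trace):
        """True if a sequence diagram's trace replays the model in order."""
        return list(trace) == MODEL

    happy_path  = list(MODEL)
    wrong_order = [MODEL[1], MODEL[0]] + MODEL[2:]
    print(conforms(happy_path))   # True
    print(conforms(wrong_order))  # False -- a mismatch caught before coding starts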
As he points out, though, describing something well and precisely means you can better understand what is required (or what is believed to be required). The description can be validated before deployment, and any problems explored thoroughly without impacting users. Moreover, an agreed formal-method approach allows individual components to be developed in isolation, with a much stronger basis for believing that the final implementations, once deployed together, will work as designed. Or as Steve says:
[I believe] that the impact of description through a formal method will help to change the way in which we can effectively do more for less with higher quality and so move the entire process of creating solutions up one level of abstraction with no loss in our ability to express a solution.
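Steve's point about developing components in isolation can be pictured as projection: each participant's local contract is simply its own view of the agreed global description, so separate teams can implement against their projections and still expect the parts to compose. A minimal sketch, reusing the toy model above (names invented for illustration):

    MODEL = [  # (sender, receiver, message) -- same toy choreography as above
        ("client",   "mediator", "quoteRequest"),
        ("mediator", "legacy",   "policyLookup"),
        ("legacy",   "mediator", "policyRecord"),
        ("mediator", "client",   "quoteResponse"),
    ]

    def project(participant):
        """One participant's local behavior: its send/recv events in global order."""
        local = []
        for sender, receiver, message in MODEL:
            if sender == participant:
                local.append(("send", message, "to " + receiver))
            elif receiver == participant:
                local.append(("recv", message, "from " + sender))
        return local

    for party in ("client", "mediator", "legacy"):
        print(party, project(party))

Generating the WSDL, BPEL and state diagrams from the CDL model, as described in the quote above, is roughly this projection done with real contract formats instead of tuples.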
As we've already mentioned, it's hard to prove something either way with only a couple of data points. But if these results are reproducible they could represent an interesting addition to a good SOA architect's tool belt.