Evaluating software architectures is a critical part of the software architecture lifecycle. The book Evaluating Software Architectures: Methods and Case Studies covers software architecture evaluation in detail, focusing on evaluation methods such as the Architecture Tradeoff Analysis Method (ATAM), the Software Architecture Analysis Method (SAAM), and Active Reviews for Intermediate Designs (ARID). The authors also present case studies of applying these methods and compare the evaluation methods with one another.
InfoQ spoke with Rick Kazman, Visiting Scientist at Carnegie Mellon University’s Software Engineering Institute (SEI) and a co-author of the book, about the significance of evaluating software architectures and how to perform architecture evaluations in Agile and Lean software development organizations. We also talked to him about the emerging trends in this space.
InfoQ: What was the main motivation for writing the book at the time?
Rick Kazman: We felt there was a need for a book that focused solely on architecture evaluation. We had written lots of articles and published method descriptions, and other people were starting to create architecture-based methods, so a more comprehensive book on architecture evaluation seemed warranted. We also felt that the book should include some substantial case studies, because very few of those have been published over the years.
InfoQ: Who is the primary target audience for the book?
Rick: Practicing software architects, aspiring software architects, and project managers would benefit from knowing how this process works.
InfoQ: Can you talk about the process changes needed to integrate software architecture evaluations into software product development lifecycle processes?
Rick: There aren’t many process changes needed. Since originally writing this book, we’ve written reports on how to include architecture-centric methods into the unified process and into agile processes, and they really fit quite naturally as quality assurance techniques that you would do as part of any mature software development methodology. Basically, once you’ve got an architectural design concept you should evaluate it before you move on to putting resources into implementation. There’s always that transition from design to realization. The book encourages you to do a little testing of your design first before you start committing major resources to it.
InfoQ: How should software architectures be evaluated in organizations that use Agile or Lean software development processes?
Rick: There are two answers to this question. The first is that agile and lean processes still need architecture. If you look at thought leaders in agile processes, such as Kent Beck or Jeff Sutherland, they all say the same thing.
On the one hand, then, nothing is different. On the other hand, in the spirit of agile and lean, it’s certainly possible to scale down the evaluation techniques so that they can be done internally and with a relatively small group. You don’t need to make them a huge interruption to your development process.
InfoQ: Is there any difference when evaluating software architectures that use new technologies, like cloud computing or mobile development, compared to traditional application architectures?
Rick: To borrow a line from John Zachman, the enterprise architecture guru, “architecture is architecture is architecture.” This means the same principles apply from enterprise architecture, the biggest possible scale, all the way down to an individual system or a component of a system. The SEI architecture work over the last 15 years has shown that the domain and the scale simply do not matter – architecture is architecture.
InfoQ: How can the architecture teams make the outcomes of software architecture evaluation efforts visible to senior management teams in organizations?
Rick: There are two things architecture teams can do to make evaluation efforts visible to senior management. One part involves marketing: you have to keep senior management informed of what you’re doing, why you’re doing it, and the benefits of doing it. The other part is more technical: if you are measuring what you are doing, then you can show hard data. You can show how many problems were found and how much money was saved by finding certain problems.
For example, AT&T kept track of the results of its architecture evaluations for about 20 years. By asking each project what the savings were from having found architecture problems early, as compared with finding them later in the life cycle, the company was able to give its managers an ROI estimate: about $1 million per project on average [1].
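To make that kind of bookkeeping concrete, here is a minimal sketch in Python of how per-project savings might be tracked and averaged. The project names, cost figures, and the simple late-cost-minus-early-cost savings model are illustrative assumptions, not AT&T’s actual method:

```python
from statistics import mean

# Hypothetical findings per evaluated project: for each problem found,
# (estimated cost to fix late in the life cycle, cost to fix at architecture time).
findings_by_project = {
    "billing-replatform": [(900_000, 40_000), (350_000, 15_000)],
    "provisioning-api": [(600_000, 25_000), (250_000, 10_000)],
}

def project_savings(findings):
    # Savings = sum over findings of (late-fix cost - early-fix cost).
    return sum(late - early for late, early in findings)

savings = {name: project_savings(f) for name, f in findings_by_project.items()}
for name, saved in savings.items():
    print(f"{name}: ${saved:,} saved")

print(f"Average savings per evaluated project: ${mean(savings.values()):,.0f}")
```

Even a simple record like this, kept over many evaluations, is enough to produce the kind of per-project average that AT&T reported to its managers.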
InfoQ: What are some typical metrics, deliverables and artifacts of software architecture evaluations that can show the business value realized by performing the evaluations?
Rick: The main and most obvious benefit of architecture evaluation is that it uncovers problems that, if left undiscovered, would be very expensive to correct later. Evaluation also produces better architectures: even if it uncovers no problems, everyone can feel confident in the architecture.
There are a number of other benefits of software architecture evaluations. Some are more difficult to measure, but they all contribute to a successful project and more mature organization.
- Stakeholders in the same room. An architecture evaluation is often the first time that many stakeholders are in the same room. This allows for a group dynamic to emerge in which everyone is able to work toward a common goal: a successful system.
- Articulation of quality goals. In an evaluation, stakeholders are forced to articulate specific quality goals that the architecture should meet in order to be deemed successful. These goals are often not captured in any requirements document and provide explicit quality benchmarks.
- Prioritization of conflicting goals. Conflicts among stakeholders regarding goals are expressed and prioritized by the group.
- Clear explanation of the architecture. The architect gives those who haven’t been privy to the architecture’s creation a chance to understand it in detail.
- Improved architectural documentation. An evaluation requires documentation of the architecture, so the project benefits by entering development better prepared.
- Opportunities for cross-project reuse. Because stakeholders and the evaluation team come from outside the development project, they are in a good position to spot components that can be reused on other projects, or to know of existing components that can be imported into the current project.
- Improved architecture practices. Organizations that practice architecture evaluation as a standard part of their development process will see better architectures not only after the fact but before as well. Over time, organizations naturally position themselves to perform well on the evaluations and develop a culture that promotes good architectural design.
InfoQ: Can you talk about some best practices architects should consider when performing software architecture evaluations?
Rick: You need to have the architecture documented before you can evaluate it.
There are other requirements for an architecture evaluation as well. For example, you need to have the right stakeholders in the meeting. It’s also important to have a trained facilitator in the meeting to make good use of time and keep everyone focused. Most of all, you need a commitment, from all stakeholders, to improving the quality of the architecture.
InfoQ: In some organizations it's critical to ensure that security and compliance requirements have been properly addressed in the software architecture models. How can one address security requirements when conducting software architecture evaluations?
Rick: Validating security or compliance requirements is no different from validating any other architectural concern. If the problem can indeed be addressed by an architectural solution, then you need to capture it as a scenario, with an explicit stimulus and response, and use this scenario to guide the investigation and analysis of the architecture.
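As a minimal sketch of what capturing a problem as a scenario can look like, the six-part quality attribute scenario form used in SEI methods such as the ATAM (source, stimulus, environment, artifact, response, response measure) can be recorded as simple structured data. The Python below and the concrete security scenario it builds are hypothetical illustrations:

```python
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    """A six-part quality attribute scenario, as used in SEI evaluation methods."""
    source: str            # who or what generates the stimulus
    stimulus: str          # the condition the architecture must respond to
    environment: str       # the conditions under which the stimulus arrives
    artifact: str          # the part of the system that is stimulated
    response: str          # the activity that should occur
    response_measure: str  # how the response is judged acceptable

# Hypothetical security scenario used to guide analysis of the architecture.
injection_attempt = QualityAttributeScenario(
    source="External attacker",
    stimulus="Attempts SQL injection through a public web form",
    environment="Normal operation",
    artifact="Order-entry service and its database layer",
    response="Input is rejected; the attempt is logged and reported",
    response_measure="No unauthorized data access; alert raised within one minute",
)
```

Writing a scenario down this way gives the evaluation team the explicit stimulus and response needed to probe the architecture against a measurable target.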
InfoQ: What are the emerging trends in this space?
Rick: I am definitely seeing a trend towards more edge development. If you think about application platforms like iPhone or Android, the architecture is actually bifurcated: you’ve got the architecture of the platform itself, and then you’ve got the architecture of the app that is written on top of that infrastructure. That really splits the evaluation activities, and changes when in the lifecycle you would do an evaluation.
We’re also starting to see a little bit more tool support. In recent years there have been a lot of reverse engineering and analysis tools popping up. That’s something we didn’t see nine years ago when the book was written.
Architecture is also starting to find its way into more university curricula, so there are more people who are versed in the language of architecture than there were 10 years ago.
About the Book Author
Rick Kazman is a Professor at the University of Hawaii and a Visiting Scientist (and former Senior Member of the Technical Staff) at the Software Engineering Institute of Carnegie Mellon University. His primary research interests are software architecture, design and analysis tools, software visualization, and software engineering economics. He also has interests in human-computer interaction and information retrieval. Kazman has created several highly influential methods and tools for architecture analysis, including the SAAM (Software Architecture Analysis Method), the ATAM (Architecture Tradeoff Analysis Method), and the Dali architecture reverse-engineering tool. He is the author of over 100 papers and co-author of several books, including Software Architecture in Practice and Evaluating Software Architectures: Methods and Case Studies.
[1] J. F. Maranzano, S. A. Rozsypal, G. H. Zimmerman, G. W. Warnken, P. E. Wirth, and D. M. Weiss, “Architecture Reviews: Practice and Experience,” IEEE Software, vol. 22, no. 2, pp. 34–43, March/April 2005.