
Trust is good, Control is better - Software Architecture Assessment


Testing implementations is an important means of obtaining information about a system under development. Likewise, code reviews help to keep code quality high. What is common practice for code is sometimes neglected for the backbone of a system: its software architecture. But how can a project team test the architecture itself? This is where software architecture assessment comes to help. It represents the most effective approach for introspecting and assessing software design, and it offers a safety net for software architects.

There are two fundamental types of architecture assessment: quantitative and qualitative.

The main benefit of quantitative assessments is that they yield hard facts about the topics under consideration. Quality attributes such as performance are typical examples that quantitative methods handle well, as are internal architectural quality aspects such as coupling or cohesion. Among the many possibilities for conducting a quantitative assessment, the most commonly used methods are:

  • Prototypes offer a vertical and/or a horizontal slice through the system in order to verify key aspects. For example, they help answer questions like “is the current system architecture capable of meeting quality attributes such as security?”
  • Simulations emulate the environment or missing parts of the system. Thus, it is possible to check whether the software can meet its requirements even in the absence of some system parts. This is an important approach for developing embedded systems, but it is not constrained to software/hardware integration.
  • Architecture analysis or Code Quality Management (CQM) tools visualize a software system and reveal potential architecture smells, often using static analysis and metrics. For instance, they can detect dependency cycles or hotspots; the sketch after this list illustrates the kind of check they automate. Continuously tracking a large number of lines of code manually without such tools would be almost impossible.
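
To make that last point more concrete, here is a minimal sketch in plain Java of the kind of check such tools automate: a depth-first search over a package dependency graph that reports cycles. The package names and the hand-rolled graph are purely illustrative assumptions, not the algorithm of any particular CQM product.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Minimal sketch: detect dependency cycles in a package dependency graph
    // using a three-color depth-first search. Package names are made up.
    public class CycleCheck {

        // adjacency list: package -> packages it depends on
        static final Map<String, List<String>> DEPS = Map.of(
                "shop.ui",      List.of("shop.orders"),
                "shop.orders",  List.of("shop.billing"),
                "shop.billing", List.of("shop.orders"),   // cycle: orders <-> billing
                "shop.common",  List.of());

        enum Color { WHITE, GRAY, BLACK }

        public static void main(String[] args) {
            Map<String, Color> color = new HashMap<>();
            DEPS.keySet().forEach(pkg -> color.put(pkg, Color.WHITE));
            for (String pkg : DEPS.keySet()) {
                if (color.get(pkg) == Color.WHITE && hasCycle(pkg, color)) {
                    System.out.println("Dependency cycle reachable from " + pkg);
                    return;
                }
            }
            System.out.println("No dependency cycles found");
        }

        // returns true if the depth-first search runs into a node that is
        // still being visited (a back edge), i.e. a cycle
        static boolean hasCycle(String pkg, Map<String, Color> color) {
            color.put(pkg, Color.GRAY);
            for (String dep : DEPS.getOrDefault(pkg, List.of())) {
                if (color.get(dep) == Color.GRAY) return true;
                if (color.get(dep) == Color.WHITE && hasCycle(dep, color)) return true;
            }
            color.put(pkg, Color.BLACK);
            return false;
        }
    }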

Unfortunately, quantitative assessments can only analyze partial aspects of the system. A prototype may make it possible to check usability or the system behavior under high-load scenarios, but not all the other quality attributes. CQM tools can find dependency cycles, but not the impact of an architecture decision on a quality attribute. When using metrics in architecture analysis tools, architects need to be aware of another pitfall: metrics must not be interpreted as absolute numbers, but should always be related to the concrete context. Otherwise, architects may draw wrong conclusions. For example, a cyclomatic complexity value of 70 may be very bad news for one system, but neither expressive nor surprising when it stems from an Observer pattern instantiation with 70 observers.
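
As a toy illustration of this pitfall, the following sketch reports a complexity value only when its context does not explain it. The threshold, the context labels, and the class names are hypothetical; real tools and teams will apply their own rules.

    // Illustrative only: a metric value is judged together with its context,
    // because the same number can be harmless in one place and alarming in another.
    public class MetricInContext {

        record ComplexityFinding(String unit, int cyclomaticComplexity, String context) {

            // hypothetical rule of thumb: generated code and uniform dispatch
            // structures (e.g. an Observer notifying 70 observers) are tolerated
            boolean worthRaising() {
                boolean explainedByContext = context.equals("generated")
                        || context.equals("uniform-dispatch");
                return cyclomaticComplexity > 15 && !explainedByContext;
            }
        }

        public static void main(String[] args) {
            System.out.println(
                new ComplexityFinding("OrderService.process", 70, "hand-written").worthRaising());      // true
            System.out.println(
                new ComplexityFinding("EventHub.notifyObservers", 70, "uniform-dispatch").worthRaising()); // false
        }
    }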

In contrast to their quantitative cousins, qualitative assessment methods often integrate many stakeholders of a system. They focus on key aspects of the system architecture and are conducted in a breadth-first manner. Hence, they evaluate the architecture without considering all details in depth, and their results may need the right interpretation by experienced architects – but so do metrics. Conducted regularly, they have proven to be very effective in many projects. One of their main benefits is their capability to cover and evaluate many aspects of a system.

The two main categories of qualitative methods comprise experience- and scenario-based reviews.

Experience-based reviews don’t provide any specific tools or forms, but instead rely on the experience of reviewers.

Let’s start small and illustrate the experience-based ADR (Architecture Design Review). Suppose an architect or developer wants to check the developer habitability of her new small library. For this purpose, she asks peers to act as reviewers. In the first step, she provides all library parts, a small piece of documentation, and a questionnaire (or maybe different questionnaires) to the reviewers. In each questionnaire, she may ask the reviewers to solve some tasks using the library and to write down how well (or badly) it worked. The reviewers conduct the evaluation and send feedback to the reviewee, who leverages it for improvement steps. ADR reviews should not take more than one day per reviewer and approximately one day per review for the reviewee herself. As this lightweight example shows, an experience-based approach is very focused and leverages the experience of the reviewers.
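
The questionnaire itself can be very simple: a handful of concrete tasks plus the questions the reviewee wants answered for each task. A minimal sketch, with invented task texts and field names:

    import java.util.List;

    // Hypothetical sketch of an ADR questionnaire: concrete tasks for the
    // reviewers and the feedback the reviewee collects for each task.
    public class AdrQuestionnaire {

        record Task(String description, String question) {}
        record Feedback(Task task, int minutesNeeded, String observations) {}

        static final List<Task> TASKS = List.of(
                new Task("Read a sample file with the library and print its records",
                         "Where did the documentation fall short?"),
                new Task("Feed the library a malformed input file",
                         "Was the error reporting understandable without reading the source?"));

        public static void main(String[] args) {
            // example feedback a reviewer might send back (content is made up)
            Feedback feedback = new Feedback(TASKS.get(0), 45,
                    "Setup was easy, but the meaning of the 'mode' parameter was unclear");
            System.out.println(feedback);
        }
    }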

In a larger setting, the experience-based Industry Practice method may be applied. In such a review, at least two senior reviewers with the right competence profiles are involved, as well as various roles and stakeholders of the architecture under review. It may take from one day (flash review) to several weeks, depending on the goal and scope of the review.

Such a larger review project comprises four phases:

  • A Kickoff Workshop where reviewers and project stakeholders meet. The reviewers introduce the review method, the software architects illustrate the software architecture, and all participants jointly define the review scope and goal. It is important that the review scope is not too broad: it should address one specific review goal by analyzing up to three topic areas. Examples: Can the software system be turned into a product line, and how much effort would that require? Does the architecture vision give the product a competitive edge? Which technology or architecture option should be chosen in order to meet the usability expectations?
  • Information Collection is the phase where the reviewers collect all available information such as documents, test plans, source code, and demos. Further information is retrieved by conducting interviews with the project stakeholders. Each interview is constrained to one hour and one interviewee. Information is kept anonymous in order to establish a trust relationship between reviewers and interviewees – or, as Tom DeMarco put it, trust is the bandwidth of communication. The benefit of interviews is that they can disclose secrets or facts that are usually not documented, such as: Why did the architects use design option A instead of B? How well do architects and testers work together? Reviewers should not conduct such interviews without preparation. Instead, it helps to thoroughly prepare stakeholder- or role-specific checklists. One common question could be: Could you please draw a rough sketch of your architecture? Another common one: If you had three free wishes for your project, what would you choose?
  • In the Evaluation phase, all information is assessed. Strengths, weaknesses, opportunities, and risks are systematically identified by the reviewers, as are possible solutions for the weaknesses and risks (a minimal data model of such findings is sketched after this list). Eventually, the reviewers create a review report draft, send it to all involved stakeholders, integrate their feedback, and finally disseminate the final report.
  • A Final Workshop helps to summarize the key findings and discuss suggested countermeasures and open issues.
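
The findings mentioned in the Evaluation phase can be captured in a very simple structure that later becomes the backbone of the report. The following sketch is purely illustrative: the categories follow the strengths/weaknesses/opportunities/risks scheme described above, while the field names and the example entry are assumptions.

    import java.util.List;

    // Illustrative data model for findings collected in the Evaluation phase.
    public class ReviewFindings {

        enum Kind { STRENGTH, WEAKNESS, OPPORTUNITY, RISK }

        record Finding(Kind kind, String topicArea, String description,
                       List<String> proposedCountermeasures) {}

        public static void main(String[] args) {
            // example entry that could end up in the report draft (content is made up)
            Finding risk = new Finding(
                    Kind.RISK,
                    "Deployment",
                    "A single shared database couples all subsystems and limits scalability",
                    List.of("Introduce subsystem-specific schemas",
                            "Plan an incremental data migration"));
            System.out.println(risk);
        }
    }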

The Industry Practice approach works very well if the reviewers are experienced. To educate architects in conducting reviews, a Master and Apprentice model is valuable: experienced architects and reviewers lead the reviews and teach junior reviewers. If you will, this is on-the-job training at its best.

The best way of introducing and conducting such assessments is to establish regular evaluation workshops. They can easily be integrated at the end of a sprint or iteration. Architects and designers review the software architecture, focusing on internal qualities such as the absence of dependency cycles and on quality attributes such as performance or modifiability. If they find any architecture smells, they define countermeasures such as refactoring, rewriting, or reengineering activities to get rid of the weaknesses. Experience shows that this is the recommended and most effective way, because regular reviews detect issues as early as possible, before they cause further harm such as an avalanche of accidental complexity and design erosion. If introspection tools such as Odasa or SotoArc are available, detecting internal quality issues is often surprisingly easy. And whenever architects compare quality attributes with architecture decisions in early stages, they reduce the probability that an important nonfunctional requirement has been neglected or designed in the wrong way.
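
If such checks are meant to run at the end of every sprint anyway, some of them can be automated as part of the build. As one possible sketch, the following uses the open-source ArchUnit library and an assumed package layout to phrase “no dependency cycles between top-level packages” as an executable rule; it is an illustration, not a recommendation of a specific tool.

    import com.tngtech.archunit.core.domain.JavaClasses;
    import com.tngtech.archunit.core.importer.ClassFileImporter;

    import static com.tngtech.archunit.library.dependencies.SlicesRuleDefinition.slices;

    // Sketch only: fail the build if any slices of the (assumed) com.example.shop
    // code base depend on each other in a cycle.
    public class ArchitectureCheck {

        public static void main(String[] args) {
            JavaClasses classes = new ClassFileImporter().importPackages("com.example.shop");

            slices().matching("com.example.shop.(*)..")
                    .should().beFreeOfCycles()
                    .check(classes);   // throws an AssertionError when a cycle is found
        }
    }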

Unfortunately, many architecture reviews are initiated very late, which makes problem detection and architecture refactoring much more complex. Nonetheless, an architecture review is still the best approach to get rid of architectural problems, even in this context. It may be late, but it is never too late.

ATAM (Architecture Tradeoff Analysis Method) is probably the best-known scenario-based approach. Another scenario-based method, SAAM (Software Architecture Analysis Method), is more focused on evolutionary aspects of a software architecture, while CBAM (Cost Benefit Analysis Method) emphasizes business aspects. Since ATAM represents the evolution of SAAM, this article only addresses ATAM.

As most architects will certainly agree, the factors with the largest impact on a software architecture are its quality attributes, also known as non-functional requirements. Hence, ATAM is dedicated to evaluating these external qualities. It compares the business drivers and the implied quality attributes with the architecture, in particular with the architecture decisions made to implement the quality attributes.


This is done iteratively with different stakeholder groups. As a result, the reviewers identify sensitivity points, which are points in the architecture that affect one specific quality attribute, as well as tradeoff points, which influence more than one quality attribute. Furthermore, the analysis yields risks that architects should mitigate by refactoring, rewriting, or reengineering parts of the architecture.

The key tools of ATAM are scenarios. The idea of scenarios is to treat quality attributes in a systematic and consistent way. Instead of defining a requirement vaguely, such as “high performance of Web access”, the stakeholders define a scenario such as “whenever a user accesses the Web shop using a standard PC and a DSL 3000 line, the page will be available within less than 5 seconds”.


Each scenario defines an external actor, such as a human user or another machine, that triggers an event (the user requests a page). The system is expected to respond to this trigger, for example by returning the requested page. The response is qualified by a measure: “this must happen in less than 5 seconds”. In addition, stakeholders may define environmental conditions that influence the response, such as normal versus overload operation.
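
Such a scenario can also be written down as a small, uniform data structure, which makes it easy to collect and compare scenarios across stakeholder groups. The following sketch mirrors the parts just described; the field names and the concrete values are illustrative assumptions based on the Web shop example.

    // Illustrative sketch: a quality attribute scenario captured as a record.
    public class ScenarioExample {

        record QualityScenario(
                String qualityAttribute,   // e.g. performance
                String source,             // external actor that triggers the event
                String stimulus,           // the triggering event itself
                String environment,        // conditions under which it happens
                String response,           // what the system is expected to do
                String responseMeasure) {} // how the response is quantified

        public static void main(String[] args) {
            QualityScenario webShopAccess = new QualityScenario(
                    "Performance",
                    "User on a standard PC with a DSL 3000 line",
                    "User requests a page of the Web shop",
                    "Normal operation",
                    "The requested page is returned",
                    "Within less than 5 seconds");
            System.out.println(webShopAccess);
        }
    }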

The main benefit of this approach is that it provides a common language for business stakeholders and architects. Using scenarios, the stakeholders can understand, categorize, and prioritize qualities in a utility tree.


Scenarios are inserted as the leaves of the utility tree. Their business importance is rated as high, medium, or low by the business analysts, and their criticality is estimated by the architects in the same manner. A pair such as (H, L) means: high importance for the business, but only small effort for the architectural implementation.
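
These (importance, criticality) pairs lend themselves to a simple prioritization: scenarios that matter most to the business and are hardest to achieve architecturally are discussed first. A minimal sketch, where the rating scale, the sample leaves, and the sort order are assumptions:

    import java.util.Comparator;
    import java.util.List;

    // Illustrative sketch of utility tree leaves: each scenario carries a business
    // importance rating and an architectural criticality (difficulty) rating.
    public class UtilityTreeExample {

        enum Rating { L, M, H }   // low, medium, high

        record Leaf(String qualityAttribute, String scenario,
                    Rating businessImportance, Rating criticality) {}

        public static void main(String[] args) {
            List<Leaf> leaves = List.of(
                    new Leaf("Performance", "Web shop page available in < 5 s over DSL 3000", Rating.H, Rating.M),
                    new Leaf("Modifiability", "New payment provider added within two weeks", Rating.H, Rating.H),
                    new Leaf("Usability", "First order completed without reading a manual", Rating.M, Rating.L));

            // assumed heuristic: discuss (H, H) scenarios first, then (H, M), and so on
            Comparator<Leaf> byPriority = Comparator.comparing(Leaf::businessImportance)
                                                    .thenComparing(Leaf::criticality)
                                                    .reversed();
            leaves.stream().sorted(byPriority).forEach(System.out::println);
        }
    }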

A complete ATAM analysis requires 3-4 workshop days, during which the evaluation team and the project team need to spend 30 to 40 person-days. The strength of ATAM is its systematic approach and the documentation of architecture decisions and risks it produces. However, it does not offer any countermeasures for dealing with the risks, as the Industry Practice method does.

So, what is the best method to assess a software system? The short answer is “it depends!” The longer version sounds more like this: in many problem contexts, architects cannot just focus on one specific approach. Instead, they need a combined approach, such as leveraging the Industry Practice method but enhancing it with scenario-based analyses and quantitative evaluation. If done regularly, problems can be detected early and eliminated inexpensively. Thus, regular architecture assessments should be integrated as mandatory safety nets in the development process and assigned as a core responsibility to software and system architects. After all, there is no free lunch with respect to architecture quality. Or, as Lenin once said, “trust is good, control is better”.


About the Author

Michael Stal is a Principal Engineer at SIEMENS as well as a professor at the University of Groningen. He coaches and mentors customers on software architecture and distributed system technologies for large systems. Michael also has a background in programming paradigms and platforms. At SIEMENS he is a trainer in the education programs for (senior) software architects. He co-authored the first two volumes of the book series Pattern-Oriented Software Architecture (POSA). Currently, he is experiencing the joy of functional programming and serves as editor-in-chief of the German JavaSPEKTRUM magazine. In his spare time, Michael enjoys running, biking, literature, and digital photography.

 
