The Set Piece Strategy: Tackling Complexity in Serverless Applications

Key Takeaways

  • Decompose complexity: Break problems down into parts so that each one can be addressed effectively.
  • Develop sustainable applications: Leverage the features offered by serverless technology, such as granular optimization, robust availability, and scalability.
  • Adopt Domain-Driven Design and a microservices-based architecture: These techniques foster team independence and streamline development processes.
  • Incorporate best practices for software delivery into serverless development: Emphasize modularity, extensibility, and observability.
  • Encourage team autonomy: Empower teams by equipping them with the tools and knowledge to manage their microservices independently.

Most of you will be familiar with the movie Mamma Mia! Here We Go Again. There is so much in this movie to entertain us: vibrant colors, locations, sun, water, an all-star cast, and more. Moviemaking, though, has many stages to go through. Everything seems simple to us as viewers, but someone needs to develop a story, write a script, find a producer, bring a director on board, and find the stars, locations, and costumes. It’s a complicated process.

When it is all packaged together, we could call it a monolith. However, a movie is not just one big blob: first there is an introduction; often there is an interval, built up by the story so far in a way that leaves you hanging on the suspense; then there are the credits. Already the movie has been broken into a few parts. Within each part are hundreds of scenes, simple and complex, all knitted together to bring us the entire movie experience.

Complexity is everywhere, not just in moviemaking. It’s in life; it’s in software engineering as well. It is a fact. And the way we tackle this complexity is essential. In the book A Philosophy of Software Design, John Ousterhout states that the most fundamental problem is problem decomposition: how we divide a problem into pieces that can be worked on independently. This is true everywhere. Regarding the film, for example, we have a vision of the whole thing; then we break it into different parts so we can focus on each.

I usually use this analogy: let’s say that you watch the night sky. It’s a blanket of dots, that’s it. No matter how often you look at it, you still get the same picture. Now, get a telescope and zoom in on one bright dot. What you see is a blur at first, then a galaxy. You keep going, and you find suns and star patterns within that galaxy. Then a planet, a cloud formation, and, at some point, a landscape. This is the way engineers should approach a complex problem. They need to know how to enter the problem, see the big picture first, and then keep going deeper.

Set Pieces in Software Delivery

Usually, when planning a movie, the director identifies areas of the film called set pieces: a car chase, a loud sequence, or a long drive, for example. They identify these parts of the movie so they can plan and film each one accordingly. They can rehearse the filming, similar to what we do in testing. This is the concept behind set pieces. Why does it matter? Because a set piece has specific characteristics that we can apply to engineering: it is one part of the whole picture.

Similarly, in software engineering, you take part of a big use case, focusing on something you can manage. Then, you can plan, rehearse, or test each part. Finally, you bring everything together to make the whole.

This approach is not specific to software engineering or serverless architectures. However, there are three reasons why we can use this approach to improve serverless applications. First, the characteristics of serverless technology allow us to do that. Second, we can use proven and familiar industry patterns and practices. Finally, we can consider application sustainability—I’ll discuss it later.

Serverless Characteristics

Let’s take a deeper look at serverless characteristics. Serverless is a cloud computing model, part of the cloud setup. There is no server management: we pay only for the compute and storage we use, and we get autoscaling and high availability. The service provider takes care of these things, so you don’t need to. It is an ecosystem of managed services, so we can optimize things at a granular level when architecting a serverless application. This is also why we can develop our applications iteratively and incrementally. At the same time, this ecosystem brings diversity into a team. Teams used to be a few engineers doing programming. Serverless architectures changed that dynamic, because programming is only part of delivering a serverless application. You need to know how to knit the services together with infrastructure as code, provision a database table, manage queues, and set up your API authentication. There are no siloed specialists for each of these tasks; they are all part of an engineer’s day-to-day job. That’s why serverless brings a diversity of skills into a team.
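
To illustrate this "knitting" with infrastructure as code, here is a minimal sketch using the AWS CDK in TypeScript, assuming a rewards-style service with one function, one table, and one queue (a subset of what a real service would need). The stack, resource names, and settings are illustrative assumptions, not something prescribed by this article.

import { Stack, StackProps, Duration } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

// Hypothetical stack: one function, one table, one queue, knitted together as code.
export class RewardsServiceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Managed NoSQL table; capacity is handled by the provider (pay per request).
    const table = new dynamodb.Table(this, 'RewardsTable', {
      partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    // Queue that buffers incoming redemption requests.
    const queue = new sqs.Queue(this, 'RedemptionQueue', {
      visibilityTimeout: Duration.seconds(60),
    });

    // Function that processes queued redemptions and writes to the table.
    const redeemFn = new lambda.Function(this, 'RedeemFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'redeem.handler',
      code: lambda.Code.fromAsset('dist/redeem'),
      memorySize: 256,
      timeout: Duration.seconds(10),
      environment: { TABLE_NAME: table.tableName },
    });

    // Wiring and least-privilege permissions, all expressed in code.
    redeemFn.addEventSource(new SqsEventSource(queue, { batchSize: 10 }));
    table.grantReadWriteData(redeemFn);
  }
}

Everything the team needs, from queue wiring to table permissions, lives in the same codebase, which is what makes these tasks part of the day-to-day job rather than a separate specialism.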

Besides optimizing a serverless application granularly and individually (API quotas, database scaling, memory allocation, function timeouts, etc.), we can also optimize it in depth, which in this context means optimizing the application according to the relative importance of its functionalities. Take three data flow pipelines as an example, where some data gets dropped into a source and flows through each pipeline. One pipeline carries price changes and another carries product reviews. Price changes are critical data, so you want that data flowing quickly. Product reviews, however, don’t need to appear for a day or two, or even a week. In this architecture, you can therefore adjust the resources each pipeline consumes and architect for cost, which translates into sustainability.
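
To make the depth optimization concrete, here is a minimal sketch, again assuming the AWS CDK in TypeScript, that tunes two hypothetical pipelines differently: the price-changes consumer processes messages immediately with more memory, while the product-reviews consumer batches messages for minutes with a smaller footprint. The names and numbers are illustrative assumptions.

import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

// Hypothetical stack with two pipelines tuned according to their importance.
export class DataPipelinesStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Critical pipeline: price changes must flow through quickly.
    const priceQueue = new sqs.Queue(this, 'PriceChangesQueue');
    const priceFn = new lambda.Function(this, 'PriceChangesConsumer', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist/price-changes'),
      memorySize: 512,            // more memory (and CPU) for faster processing
      timeout: Duration.seconds(10),
    });
    priceFn.addEventSource(new SqsEventSource(priceQueue, { batchSize: 1 }));

    // Low-priority pipeline: product reviews can wait, so batch aggressively.
    const reviewQueue = new sqs.Queue(this, 'ProductReviewsQueue');
    const reviewFn = new lambda.Function(this, 'ProductReviewsConsumer', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist/product-reviews'),
      memorySize: 128,            // smaller footprint, cheaper per invocation
      timeout: Duration.seconds(30),
    });
    reviewFn.addEventSource(new SqsEventSource(reviewQueue, {
      batchSize: 10,
      maxBatchingWindow: Duration.minutes(5),  // wait and process in bulk
    }));
  }
}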

Domain-Driven Design and Microservices

Let’s look at domain-driven design (DDD) and microservices. With the advent of DDD, we started splitting our organizations into domains and subdomains, breaking them down for more visibility and control. With that, we now had boundaries, or bounded contexts. Guarding those boundaries is the most crucial aspect for a serverless team or organization to develop successfully with serverless technologies.

When discussing boundaries, we also need to discuss team topologies: the structure of different teams, such as stream-aligned teams or platform teams. If we focus on stream-aligned teams, then for each boundary we can assign a team to guard it; they become the custodians of that bounded context. We break the organization down into domains and subdomains, identify the boundary where, in DDD terms, the ubiquitous language is spoken, and we now have a domain model. As a team, we are responsible for protecting that domain model. Who takes over from here? Microservices, because the team can now build microservices and applications that reside within their boundary. We will see how they interact later on.
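
As a small illustration, here is what a domain model expressed in the ubiquitous language might look like as a TypeScript sketch. The Reward and Redemption names anticipate the rewards example later in this article; the fields and rules are illustrative assumptions.

// Ubiquitous language of a hypothetical "rewards" bounded context,
// expressed as types the owning team protects.
export type RewardCode = string;

export interface Reward {
  code: RewardCode;
  description: string;
  expiresAt: Date;
  status: 'active' | 'redeemed' | 'expired';
}

export interface Redemption {
  code: RewardCode;
  customerId: string;
  redeemedAt: Date;
}

// The domain model enforces its own rules; nothing outside the boundary does.
export function redeem(reward: Reward, customerId: string, now = new Date()): Redemption {
  if (reward.status !== 'active') throw new Error('Reward is not active');
  if (reward.expiresAt.getTime() < now.getTime()) throw new Error('Reward has expired');
  return { code: reward.code, customerId, redeemedAt: now };
}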

This is why it’s essential, whether we use serverless or not, to capitalize on proven industry practices and patterns as they evolve and to benefit from them. DDD came along 20-odd years ago, microservices came later, and team topologies only recently. We can still bring everything together and make it work harmoniously. Domains, team autonomy, boundaries, microservices, contracts: these should be on the mind of everyone who architects serverless applications.

Sustain Your Applications

Let’s talk about serverless application sustainability. When we talk about sustainability, most people think about green initiatives. Sustainability, as a definition, is very generic: we keep something going with a little nourishment so it doesn’t die off. This is precisely the principle we apply to our planet; we want it to keep going for future generations. But how does it relate to serverless or software engineering? Let’s go back to the old waterfall model, which I’m sure many of you have come across. Typically, it starts with requirements and then continues through different siloed phases, often taking weeks, months, or maybe years to complete. After release, the application gets pushed into some maintenance mode.

Let’s think differently when it comes to serverless, and more specifically to what I call sustaining a serverless application: you start with an idea, design your application, build it, deploy it to the cloud, and then look after it. But it’s not finished yet; you must keep it going. You start with a minimum viable product, but your goal is to make it the most valuable product. For that, you need to iterate. When you do that, you are sustaining your product. That is the different meaning of sustainability in our context.

The cloud is basically composed of three things: computing, storage, and networking. The "serverless" part is already in the picture because it’s part of the cloud. In serverless development, we use the cloud to build products with serverless technologies, following processes that allow us to operate in the cloud successfully. This is what I call the sustainability triangle in serverless.

We have the products, the processes, and the cloud, forming the sustainability triangle. In this triangle, the processes are what allow us to deploy our products sustainably and operate sustainably in the cloud. And while a sustainable product can mean many things, it has three essential aspects: modularity, extensibility, and observability. These aspects are interdependent. For example, if we have a modular product, it can likely be extended; and if we have better visibility into what’s happening in a modular service, we can sustain it longer. That’s the mindset we need when we work with serverless development and the services we build.
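
As a small illustration of the observability aspect, here is a minimal sketch of an AWS Lambda handler in TypeScript that emits structured, queryable logs for an assumed rewards queue consumer. The field names and service name are illustrative assumptions.

import type { SQSHandler } from 'aws-lambda';

// Minimal structured logging: one JSON line per event, easy to query later.
function log(level: 'info' | 'error', message: string, fields: Record<string, unknown> = {}) {
  console.log(JSON.stringify({ level, message, service: 'rewards', ...fields }));
}

export const handler: SQSHandler = async (event) => {
  log('info', 'batch received', { batchSize: event.Records.length });

  for (const record of event.Records) {
    try {
      // ... process the record (domain logic lives elsewhere in the module) ...
      log('info', 'record processed', { messageId: record.messageId });
    } catch (err) {
      log('error', 'record failed', { messageId: record.messageId, error: String(err) });
      throw err; // let the queue redrive the message
    }
  }
};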

Sustainable processes could be many things. They reflect the mindset of the developers and engineers behind the development, who use these processes and the cloud as their operating platform to sustain their products and operate them sustainably. These are the three aspects of sustainability, and all of them should be kept in mind when architecting, because the cloud is both the operating environment and shapes how we operate; that’s where the cloud corner of the triangle comes in. Some of the processes, for example, are lean principles; being pragmatic with iterative, agile development, starting small with an MVP mindset, and moving forward through the typical agile cycle; automation with a DevOps mindset; and continuous refactoring.

With modern technologies, cloud providers release services and features daily. That means we can’t stand still after building an application, so we should be able to continuously evaluate, refactor, and improve things for the future. We are enhancing or sustaining as we go.

Something I always recommend to engineers is to architect their solutions with sustainability in mind. This is very important, especially in the serverless landscape. Sustainability in the cloud is a shared responsibility: cloud providers take care of certain sustainability aspects of the cloud itself, and as customers or consumers, we are responsible for architecting our solutions to gain the benefits of sustainability, so that our contribution flows via the provider to the wider world. This is, again, an essential aspect of architecting serverless applications.

Set Piece in Practice

Let’s put everything into practice and take a small reward system as an example. You go to an e-commerce website where you have rewards, vouchers, or codes you want to redeem. The website uses a content management system to load the reward data, and it typically has a backend service to validate the code and make the redemption. Then, there may be a third-party application where some data is stored as a ledger. Let’s say the CMS and the ledger are third parties, so we don’t focus on them too much. Our domain here is e-commerce; it could be different in your case. Let’s say, for argument’s sake, that the subdomain is the customer, and within it there is a bounded context that matters: rewards. That’s where the architecture comes in.

A traditional microservices approach usually maps one bounded context to one big, monolithic microservice, primarily because of containerization. However, with the serverless characteristics we saw earlier, we can think differently. When comparing the traditional approach with finer-grained microservices at this scale, we need to consider whether a particular piece of the application or service changes a lot. For example, the reward redemption business logic changes frequently because business rules change. So why should we deploy the entire thing every time when only one small part is changing?
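
As a sketch of that idea, assuming the redemption rules sit behind an API Gateway endpoint, the frequently changing logic can be isolated in a single function that is deployed on its own. The validation rule and payload shape below are illustrative assumptions.

import type { APIGatewayProxyHandler } from 'aws-lambda';

// Hypothetical, independently deployable handler for the frequently changing
// redemption rules. Only this function is redeployed when the rules change.
export const handler: APIGatewayProxyHandler = async (event) => {
  const { code, customerId } = JSON.parse(event.body ?? '{}');

  // A business rule that tends to change often: what counts as a valid code.
  const isValid = typeof code === 'string' && /^[A-Z0-9]{8}$/.test(code);
  if (!isValid || !customerId) {
    return { statusCode: 400, body: JSON.stringify({ error: 'Invalid redemption request' }) };
  }

  // ... look up the reward and record the redemption (omitted) ...
  return { statusCode: 200, body: JSON.stringify({ code, customerId, status: 'redeemed' }) };
};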

This is where we can introduce the thinking of identifying the pieces. Look for areas we can decouple and build as separate pieces: find core services, such as the backend service, and then identify the data flows. Identify those areas so they can be developed as separate microservices with different interaction patterns with the rest of the system. That is one way of looking at the problem. Then there is the anti-corruption layer (ACL), the protective measure that guards your domain model. Suppose the CMS data model is different from the rewards bounded context model. In that case, the ACL transforms, translates, and pushes the data, so that if you replace the CMS, or even the CRM, you don’t need to change much within the core model.
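
Here is a minimal TypeScript sketch of such an anti-corruption layer, assuming a hypothetical CMS payload shape and the rewards domain model sketched earlier. All field names are illustrative assumptions.

// Hypothetical shape of the reward data as the third-party CMS exposes it.
interface CmsRewardEntry {
  voucher_code: string;
  display_text: string;
  valid_until: string;   // ISO date string in the CMS model
  state: 'LIVE' | 'USED' | 'LAPSED';
}

// Domain model of the rewards bounded context (ubiquitous language).
interface Reward {
  code: string;
  description: string;
  expiresAt: Date;
  status: 'active' | 'redeemed' | 'expired';
}

// Anti-corruption layer: translate the external model into the domain model,
// so a CMS (or CRM) swap only touches this function, not the core model.
export function toDomainReward(entry: CmsRewardEntry): Reward {
  const statusMap: Record<CmsRewardEntry['state'], Reward['status']> = {
    LIVE: 'active',
    USED: 'redeemed',
    LAPSED: 'expired',
  };
  return {
    code: entry.voucher_code,
    description: entry.display_text,
    expiresAt: new Date(entry.valid_until),
    status: statusMap[entry.state],
  };
}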

How do we piece these things together? We have a bounded context, and then we put some microservices in place. These are all smaller microservices, and they all connect to each other. But how do they connect? This is where engineers usually struggle. If we look back at the filmmaking process, how are hundreds of scenes and sequences of scenes combined? Mainly with dialogue and background music carrying over from one scene to the next. What do we have in the world of microservices? You know the answer: APIs, events, and messages. This is why, even when you break things into different pieces, the system still works beautifully as one application.

If we add these aspects, we can redraw the application diagram accordingly. We identify the synchronous API invocation paths and, where we can, use asynchronous, event-driven communication. These are some ways of thinking about architecture when dealing with serverless applications and taking advantage of their characteristics and patterns.
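
As a sketch of the asynchronous path, assuming Amazon EventBridge as the event broker and the AWS SDK for JavaScript v3, a microservice can announce a fact instead of calling its neighbors synchronously. The bus name, source, and event shape are illustrative assumptions.

import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

const client = new EventBridgeClient({});

// Asynchronous, event-driven path: the rewards service announces a fact,
// and other microservices subscribe without a direct (synchronous) call.
export async function publishRewardRedeemed(code: string, customerId: string): Promise<void> {
  await client.send(new PutEventsCommand({
    Entries: [{
      EventBusName: 'commerce-events',          // hypothetical shared event bus
      Source: 'customer.rewards',               // bounded context that owns the event
      DetailType: 'RewardRedeemed',
      Detail: JSON.stringify({ code, customerId, redeemedAt: new Date().toISOString() }),
    }],
  }));
}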

Serverless Microservices Approach

Typically, this is how your rewards system will look in a serverless world. The important thing to notice is that all the microservices exist inside your bounded context; they don’t cross the boundaries. That’s where communication and contracts come in. Then you can have independent deployment pipelines going happily to production without impacting anything else. This is the power of breaking things down and making them more manageable for everyone, including engineers and architects. For that, we need an autonomous team that owns the microservices within the bounded context. That’s important; that’s the ownership. Everything that happens there is their responsibility.

You need microservices to deal with reports or data generation, microservices to send emails to customers, to receive feedback, and so on. These are areas we can easily decouple. When we build our application, we don’t need to start with all of these at once. Email can come later, or you can add report generation once you know what data this bounded context deals with. Then, of course, the autonomous team operates in its own cloud account. This is important. Many organizations are still going through this phase, and not many have achieved it, but it is crucial for the velocity and flow of the team. They have their own account and their own repository, and they don’t deal with anything outside their boundary. If you want to talk to their services, there is an API, the event flows, the event broker, or the common event bus. That is what we aim to build and architect with serverless.
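
To show what such a contract might look like, here is a minimal TypeScript sketch of a published event contract that consumers outside the rewards boundary would depend on, matching the hypothetical RewardRedeemed event sketched earlier. The shape is an illustrative assumption.

// Hypothetical published contract for the "RewardRedeemed" event. Teams outside
// the rewards boundary depend on this shape, never on the internal domain model.
export interface RewardRedeemedEvent {
  source: 'customer.rewards';
  detailType: 'RewardRedeemed';
  detail: {
    code: string;
    customerId: string;
    redeemedAt: string;   // ISO-8601 timestamp
  };
}

// A consuming team can narrow incoming events against the contract.
export function isRewardRedeemed(event: { source?: string; 'detail-type'?: string }): boolean {
  return event.source === 'customer.rewards' && event['detail-type'] === 'RewardRedeemed';
}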

In Summary

When we look at application architecture through the serverless lens, we must think about its unique aspects. Take advantage of the serverless architecture characteristics; make use of them. Use the architectural patterns. Don’t be shy about introducing anti-corruption layers or microservices to other engineers around you. Let them learn. More importantly, encourage team autonomy.

A couple of months ago, an engineer took over a particular piece of new work. He was going to create an architecture diagram, but he had no clue how to tackle it, so he started drawing APIs and things. I asked, "How do you know you need an API here?" He said that the system has an API, so that’s what he drew. I replied, "Why don’t you start with something like domain storytelling? You draw the picture as a storyboard. Domain Storytelling is a book you can follow, and it’s nice to envision it in that way. Then you explain it to everyone, including stakeholders. Once you see what’s good for the feature or the service you’re building, you can slowly think about the design and architecture." Challenge engineers to confront complexity. Feed them all the sound patterns and practices.
