"Surviving Microservices" with Richard Rodger at microXchg: Messages, Pattern Matching and Failure

At the microXchg 2016 conference, held in Berlin, Germany, Richard Rodger presented “Surviving Microservices”, a practical guide for developers wanting to keep their microservices architectures ‘healthy and performant’. Key topics discussed in the talk included the benefits of message-oriented systems, the use of pattern matching with inter-service communication, dealing with failure, and the introduction of the Seneca.js microservice framework.

InfoQ recently sat down with Rodger, CTO and co-founder at nearForm, to discuss key themes from the talk in more detail. The conversation also covered the motivation for creating the Seneca.js framework (and where it sits in the current landscape of microservice platforms), handling failure effectively within distributed systems, and the benefits of using message-oriented communication.

Rodger shared several interesting insights: microservices, typically implemented as distributed systems, have some “deliciously evil failure modes”; a core property of microservices should be loosely-coupled composability, without the unneeded complexity inspired by object-oriented languages; and that by searching for and implementing “standards” within the microservice community, we may repeat the mistakes of classical service-oriented architecture (SOA).

InfoQ: Hi Richard, thanks for taking time out to chat to InfoQ. Could you provide a quick overview of your microXchg talk for the readers please?

Rodger: Microservices are not a golden hammer! It is an unfortunate characteristic of our industry that emerging advances in the practice of software engineering often become diluted by “irrational exuberance”. If you actually use microservices in practice, you’ll find that, as with all distributed systems, there are some deliciously evil failure modes. We’ve seen them in practice, so I wanted to talk about our experiences.

It should be said that our experience is also that these failure modes do not detract from the significant advantages of the microservice architecture. We use microservices because they let us move faster, and because they neutralise technical debt. They make the implementation of continuous delivery, and scaling, much easier. And finally, microservices are just much more enjoyable to work with as a developer. There is no more waiting for your entire system to come up to validate a new feature locally - you just restart the relevant service. Your code-build-test iteration cycle is measured in seconds, not minutes.

The talk uses a small demonstration system to explain the ways in which services, and more importantly, the messages between services, interact. You can categorise messages as being synchronous (expecting a response) or asynchronous (not expecting a response). You can categorise services with respect to messages as consumers (acting alone) or observers (allowing others to react to the message). This means that you have four kinds of interaction. You can then ask how these interactions fail. The talk then uses this interaction model to discuss both the failure modes and, more usefully, how you can mitigate them.
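
As a rough illustration of that two-by-two model, here is a minimal, runnable sketch. The tiny in-memory “bus” and all of the function names are invented for illustration; this is not the talk’s demo code or any real transport.

```javascript
// An invented in-memory bus: each "service" is just a registered handler.
const services = []
const register = (handler) => services.push(handler)

// Synchronous + consumer: one service handles the message, the sender waits.
const act = (msg) => services[0](msg)

// Synchronous + observer: all services see the message, the sender waits.
const actAll = (msg) => Promise.all(services.map((s) => s(msg)))

// Asynchronous + consumer: one service handles it, nobody waits.
const send = (msg) => { services[0](msg) }

// Asynchronous + observer: all services see it, nobody waits.
const publish = (msg) => { services.forEach((s) => s(msg)) }

register(async (msg) => ({ seen: msg }))

act({ cmd: 'total' }).then(console.log) // a response is expected
publish({ info: 'created' })            // no response is expected
```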

Microservice architectures are not as deterministic as classical monolithic architectures so your traditional survival instincts (good testing, validated schemas, project management methodologies) are much less effective. Instead you have to adopt a mindset that accepts failure as a normal everyday occurrence. You build your system to tolerate failure. This has implications for the architecture at all levels, and the talk examines these in the context of each failure mode.

InfoQ: You introduced the Seneca.js microservice framework at the conference. Could you tell us more about the motivations for creating this kind of toolkit?

Rodger: This framework is something we’ve (nearForm.com) been using since 2010! In the last two years it has really taken off as an open source community, and we have very healthy growth, which is great to see. The framework started as a way to enable composition of business logic functionality. What does that mean? Well, software components are very hard to get right. It’s no good providing every setting under the sun, and a large API to cover every case. That’s just more stuff to learn, and more places for bugs to live. Component models that actually work in practice, such as UNIX pipes, work because they enable *composition*. This is the very simple idea that you can build big things out of little things.

The trick is to make it easy for little things to “snap” together. That can only come from a uniform, simple interface model. Modern object-oriented languages are far too complex for this, and all their features are actually a weakness. That is why objects have failed as a component model. The mere existence of the Gang-of-Four book, “Design Patterns”, is conclusive proof of that failure. Why would a set of safe constructions be necessary in the first place? The whole point of component composition is that you can plug them together any old way!

I was experimenting with schema-free JSON documents as a message-oriented communication medium for components. The problem I kept running up against was the problem of identity. In order for component A to use component B, A has to know about B. It has to know B’s identity. This is a huge problem, as it creates far too much coupling between components. In the object-oriented world, you need to have a reference to the object you want to call a method on. You end up with things like Dependency Inversion to make this work at the complexity scale needed for large systems.

I took a different approach. Why not use pattern matching? If a message is something you, as a component, are interestedted in, you can tell by looking. Have a pattern (some template of JSON structure) that you can match the message against. It turns out that it is sufficient to be really, really simple - just match against the literal values of the top-level properties. That is more than enough to build entire systems with. And it gives you a component model that makes composition easy. You can see this in the fact that Seneca now has over 200 plugins that connect you to pretty much any database, message bus, or service registry, and all sorts of other things. And on top of that you have business logic components for user accounts, projects, even a hypercard data model.
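
A minimal sketch of that matching model, in the style of Seneca’s basic add/act API; the role/cmd pattern names here are illustrative, not from the talk.

```javascript
const seneca = require('seneca')()

// Register an action for any message whose top-level properties
// include role:'math' and cmd:'sum'.
seneca.add({ role: 'math', cmd: 'sum' }, (msg, reply) => {
  reply(null, { answer: msg.left + msg.right })
})

// This message matches the pattern on role and cmd; left and right
// are just data carried along with it.
seneca.act({ role: 'math', cmd: 'sum', left: 1, right: 2 }, (err, out) => {
  if (err) throw err
  console.log(out.answer) // 3
})
```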

Once we had this model working, the next step was obvious: make it work over the network at scale. To do this we adopted the principle of transport independence. Seneca microservices simply do not care how messages get from A to B. It might be HTTP REST. It might be a message bus. It might be publish/subscribe. It just doesn’t matter. From the perspective of the service, you get messages in, and you send messages out, and that is your entire universe. And we assume those messages are always subject to the vagaries of the network.
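
A minimal sketch of what transport independence looks like in practice: the action from the earlier example can be exposed over the network and called remotely without changing its code. The port number and pattern names are illustrative.

```javascript
const seneca = require('seneca')

// Service process: handle matching messages arriving over HTTP.
seneca()
  .add({ role: 'math', cmd: 'sum' }, (msg, reply) =>
    reply(null, { answer: msg.left + msg.right }))
  .listen({ type: 'http', port: 8080 })

// Client process: route matching messages to the remote service.
seneca()
  .client({ type: 'http', port: 8080 })
  .act({ role: 'math', cmd: 'sum', left: 1, right: 2 }, console.log)
```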

Seneca does not pretend that messages are local - it assumes failure. For example, there is an internal circuit breaker for repeated messages and message loops. By separating the topology of the network completely from the service implementation, you can get incredible flexibility. For example, Seneca microservices can be turned into a monolith simply by using function calls as the transport mechanism! This turns out to be very handy for local development, and also for those who prefer to start with a monolith. In general, Seneca does not take a position on how microservices should be deployed.
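
And a minimal sketch of the monolith deployment of the same services: load the actions into one process and messages travel as plain function calls, with no network transport at all. The ./math module is a hypothetical plugin exporting the action shown earlier.

```javascript
const seneca = require('seneca')()

seneca
  .use(require('./math')) // in-process: messages become function calls
  .act({ role: 'math', cmd: 'sum', left: 1, right: 2 }, console.log)
```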

InfoQ: There appears to be a proliferation of microservice frameworks at the moment, particularly within the Go(lang) and JavaScript language space. Could you share your thoughts about this? Do you think clear winners will emerge, or will this process drive the creation of standards?

Rodger: The last thing we need is a standard. We must not repeat the mistakes of the SOA architecture. Microservices don’t need a standard. Think about it. Let’s say I want Seneca to talk to a pure HTTP REST microservice. A little code library that translates Seneca messages into HTTP requests does the job. From the other side, a small bit of code can translate the other way. The same goes for frameworks like Akka, or architectures that use Kafka as a universal message log. It’s not rocket science to translate messages. You must of course apply Postel’s Law - “be liberal in what you accept, and strict in what you emit”. This is why it is so important not to have strict schemas - they really undermine the benefits of the architecture.
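
A minimal sketch of that translation idea: a small adapter action that turns a pattern-matched message into a plain HTTP request. The URL, endpoint, and message shape are invented for illustration.

```javascript
const seneca = require('seneca')()

seneca.add({ role: 'user', cmd: 'get' }, (msg, reply) => {
  // Translate the Seneca message into a REST call
  // (assumes the global fetch available in Node 18+).
  fetch(`http://users.internal/users/${msg.id}`)
    .then((res) => res.json())
    .then((user) => reply(null, user))
    .catch(reply)
})
```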

Live and let live - that’s pretty much the microservice philosophy. Everybody is welcome to the party.

InfoQ: In your talk you discuss some of the ways in which microservice-based systems can fail (with reference to the eight fallacies of distributed computing). Can you share any horror-stories, and ideally how you addressed the issues, please?

Rodger: No, I can’t share! The reason we build example systems like nodezoo.com is that our clients expect the highest levels of confidentiality. We help them with strategic initiatives, and there’s a lot at stake in competitive markets. This is very frustrating. I am quite envious of developers in consumer-oriented companies like Netflix and Uber, because they can talk directly about their live systems. We just aren’t able to do that.

But let’s talk about memory leaks, in the general case. Everybody has them. They cause many sleepless nights for people running production systems. And they are very hard to identify before you go into production. 100% unit test coverage won’t help you here. The nightmare scenario is one where you are constantly restarting systems that keep failing after a few minutes, and your response latency is giving most users a terrible experience. Your system is not exactly down, but it’s not really up either. And you’re definitely losing money.

With microservices, you can turn this problem on its head. Instead of assuming that your services will be long-lived, design them to die often and quickly. Take a high-load service with many instances. Give it a half-life of, say, one hour. It’s very important to use randomness, by the way - it avoids the introduction of systematic feedback loops that can cause full system failure. The half-life idea is stolen from physics, and means that if you start with 100 services and wait one hour, then you’ll only have 50 left (with a maximum lifetime, if you’re nit-picking). This makes you much more immune to memory leaks. It almost doesn’t matter if your code has them, because any individual service won’t last long enough for them to manifest.
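
A minimal sketch of such a randomised half-life shutdown, assuming a process manager or orchestrator restarts the service after it exits (that restart machinery is outside the sketch). Sampling from an exponential distribution makes the median lifetime equal to the half-life, so restarts stay spread out rather than synchronising.

```javascript
const HALF_LIFE_MS = 60 * 60 * 1000 // one hour

// For an exponential distribution, median = ln(2) / rate, so this
// lifetime has a 50% chance of being under HALF_LIFE_MS.
const lifetimeMs = -HALF_LIFE_MS * Math.log(Math.random()) / Math.LN2

setTimeout(() => {
  // In a real service: stop accepting messages and drain
  // in-flight work before exiting.
  process.exit(0)
}, lifetimeMs)
```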

For server memory leaks, the microservice architecture also helps. You can spin up lots of instances of the problematic service. This is very different from, and much less costly than, spinning up lots of instances of a monolith. With lots of instances on a short half-life, you can buy yourself time to fix the problem. Usually the immediate fix is just to redeploy an older version of the service that is known to be good, and let the leaking new version die out. Sure, you lose the new feature that you just built, but your system stays up, and you have time to figure out what went wrong. And given that microservices let you easily practice continuous delivery with staged deployment, this is a situation that you’ll be aware of before it affects your entire system anyway.

There’s a great article on software deployment by Zach Holman, formerly of GitHub, that is relevant here. Although it is written from a monolith perspective, it is easy to see that microservices make the ideas much easier to implement. In particular, feature flags are not necessary, as these map to the deployment state of specific microservices.

InfoQ: You made a strong argument for message-oriented systems in your microXchg talk. Can you describe a little more about the premise, and also provide advice for people who are evaluating this style of communication for microservices (in comparison with, say, REST/RPC)?

Rodger: It’s a pity that it is called the microservice architecture. Messages are first-class citizens, and just as important. Here’s how we go about building microservice systems. We deliver a working system each week, which is an iteration. Each iteration can be described fully by the list of services that we are adding, and the list of services that we are removing. Microservice architectures are very dynamic and responsive to business requirements.

You take those business requirements, and use them to identify business activities. These activities correspond to message interactions. Maybe a single message, maybe a chain of messages. You now have a list of messages. You can then group those messages into logical groupings, and these groupings give you the services. So start with messages, and derive the services from them.
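
A minimal sketch of that message-first approach: list the message patterns a business activity needs, then group them into services. Every pattern and service name here is invented for illustration.

```javascript
// Step 1: derive the messages from the business activities.
const messages = [
  'role:order,cmd:create',   // synchronous: create an order
  'role:order,cmd:cancel',   // synchronous: cancel an order
  'role:order,info:created', // asynchronous: announce a new order
  'role:stock,cmd:reserve'   // synchronous: reserve stock for an order
]

// Step 2: group the patterns, and the groupings give you the services:
//   order-service handles role:order,cmd:*
//   stock-service handles role:stock,cmd:reserve
//   audit-service observes role:order,info:created
console.log(messages.join('\n'))
```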

You don’t need to define all the messages at first. And even if you get some messages wrong, you can always use translation to remove them from your system in an organized way. You never end up snookered.

Pattern matching is critical to making this work. It enables translation, it enables multi-version deployments. And if you list the patterns, you get a domain language.

InfoQ: It's been great chatting with you today. Is there anything else you would like to share with the InfoQ readers?

Rodger: If I may, a shameless plug! I’m writing a book, ‘The Tao of Microservices’, that is a deep dive into all the experience that we’ve gained over the last few years building these kinds of systems. It’s in early release with Manning.

I am also very happy to debate microservices on Twitter: @rjrodger.

The video for Richard Rodger’s microXchg talk, “Surviving Microservices”, can be found on the conference YouTube channel.
