We all know the importance of washing our hands to prevent diseases from spreading, but when it comes to application security we often treat it as an afterthought instead. We have learnt how to build testing into our development workflows, yet with security we tend to assume that someone else will come and fix it later, Sam Newman claimed at the recent Microservices Conference in London, in his keynote on security in a microservices context.
For Newman, working at Atomist, microservices are drawn as hexagons and have names that correspond to their business responsibilities. They are also autonomous, and he specifically notes that this autonomy comes primarily through independent deployability.
With a monolithic system, we often have one perimeter or boundary, and one database to protect. If an attacker breaks into such a system, everything within it will probably be available to them. In a microservices-based system with proper security, we limit both the privileges an attacker can gain and the amount of data exposed by breaching a single service. But we also get a larger attack surface and more servers that can be attacked. Method calls that were inside a single monolithic process are now network calls to exposed APIs, and with many servers and manual patching, the risk of forgetting to roll out patches to one server increases.
Often we don’t think rationally when we see an exploit or potential attack vector. Commonly we optimize to prevent the use of that particular exploit, when we should instead take a step back and look at security holistically. As a result, we often spend money on the wrong things while leaving wide-open vulnerabilities in our systems.
Instead, we should take a threat modelling approach that breaks down how to think rationally about where to focus our prevention efforts. Two examples are attack trees, described by Bruce Schneier, and Microsoft's Security Development Lifecycle with its STRIDE and DREAD threat modelling techniques.
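The core idea of an attack tree can be sketched in a few lines of code: the root is the attacker's goal, children are ways of achieving it, OR nodes need any one child to succeed while AND nodes need all of them, and annotating leaves with an estimated cost reveals the cheapest attack path, which is where defensive spending pays off first. The node structure and the costs below are purely illustrative, not from the talk.

```python
def cheapest_attack(node):
    """Return the minimum cost for an attacker to reach this goal."""
    if "cost" in node:  # leaf: a concrete attack step with an estimated cost
        return node["cost"]
    child_costs = [cheapest_attack(child) for child in node["children"]]
    # OR: the attacker picks the cheapest option; AND: all steps are needed
    return min(child_costs) if node["type"] == "OR" else sum(child_costs)

# Hypothetical goal: steal customer data
steal_customer_data = {
    "type": "OR",
    "children": [
        {"cost": 500},    # e.g. phish an employee's credentials
        {"type": "AND",   # e.g. breach one service AND pivot to its database
         "children": [{"cost": 2000}, {"cost": 3000}]},
    ],
}

print(cheapest_attack(steal_customer_data))  # → 500
```

Here the phishing leaf is the cheapest path at 500, so rational spending would harden that first rather than the more expensive technical route.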
One simple step towards increased security is to start using HTTPS everywhere, including for internal traffic. This guarantees that the payload has not been tampered with and that the server is the one it claims to be. Let's Encrypt is a free and automated certificate authority trying to reduce the hassle of getting HTTPS certificates everywhere on the public web, and Newman notes that its most important feature is the automation. Verifying the client to the server requires a client certificate, but these are often a burden to manage.
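In Python, both directions of this verification can be seen in the standard library's ssl module: the default client context already checks the server's certificate chain and hostname, while mutual TLS requires the server to demand a client certificate as well. This is a minimal sketch; the certificate file paths in the comment are hypothetical placeholders.

```python
import ssl

# Server verification: the default client-side context validates the
# certificate chain and the hostname, which is what gives the guarantee
# that we are talking to the correct server.
client_side = ssl.create_default_context()
assert client_side.verify_mode == ssl.CERT_REQUIRED
assert client_side.check_hostname

# Client verification (mutual TLS): a server-side context that requires
# the client to present a certificate too.
server_side = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_side.verify_mode = ssl.CERT_REQUIRED  # demand a client certificate
# server_side.load_cert_chain("server.crt", "server.key")  # hypothetical paths
```

The burden Newman mentions comes from issuing, distributing, and rotating those client certificates across many services, which is exactly the kind of work that benefits from the same automation Let's Encrypt brings to server certificates.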
Newman considers Docker a great technology, but notes that many official and trusted images have critical vulnerabilities, which means that when you install one of them, you inherit those vulnerabilities. He therefore highly recommends frequent patching and tools like Clair that do static analysis of images for known vulnerabilities.
Detection, or knowing that something bad has happened, can be valuable in preventing further attacks, but finding new vulnerabilities in your running servers is also important. Attacks often leave traces in the logs, and Newman notes that getting all logs into one central location is one of the very first things you should do, both from a security and from an application development point of view.
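Centralized logging works best when each service emits logs in a machine-parseable form that a forwarder can ship to the central store. As an illustration only (the field names and the service name are assumptions, and the talk does not prescribe a format), a minimal structured-logging setup in Python might look like this:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON line, easy to ship and search centrally."""
    def format(self, record):
        return json.dumps({
            "service": record.name,          # which microservice emitted this
            "level": record.levelname,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("payment-service")   # hypothetical service name
log.addHandler(handler)

# A line like this, aggregated centrally, is the kind of trace that
# makes an attack visible across service boundaries.
log.warning("repeated failed logins for user id=42")
```

With every service logging in the same structure, a single query against the central store can correlate suspicious activity across the whole system instead of per server.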
Besides prevention and detection, Newman notes the importance of also thinking about response and recovery. How do you respond to a security breach and communicate the issues to your customers, and how do you recover if your system has been corrupted? Recovering from backups is much harder when data is spread over a number of microservices.
Next year’s Microservices Conference in London is scheduled for November 6-7, 2017.