Key Takeaways
- Actively engage with security teams when crafting your delivery pipeline
- “Many eyes on code” doesn’t inherently make open-source software more secure
- The "edge" secured by data center security products expands in the cloud, changing your approach
- Don't forget to consider build-and-deploy security when working with PaaS or serverless environments
- Hear Cheslock’s full take at the upcoming Agile Alliance Technical Conference
Does your approach to application and data center security change when adopting cloud services? To learn more about this topic, InfoQ reached out to Pete Cheslock, head of operations and support teams at Threat Stack.
Cheslock is a presenter at the upcoming Agile Alliance Technical Conference (AATC) held in Boston from April 19-21. His talk, "Security at Scale: Building a Security-First Technical Organization," looks at the impact of cloud -aaS platforms on your architecture and teams.
InfoQ: Have the on-demand, self-service capabilities of cloud IaaS forced a change in an organization's security approach? If so, how?
Cheslock: Absolutely. Before, security teams could tie themselves into the procurement process to ensure that all systems and applications met standards before being deployed and consumed by customers. But now, anyone at the company can create an account with a cloud provider and start deploying services. This is commonly referred to as "shadow IT." We've seen countless companies with dozens or hundreds of AWS accounts, created as a way to get access to computing resources their internal technical teams were unable to provide. Now devs and even ops teams can circumvent security policies and leave their security teams behind to clean up the mess.
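Cheslock doesn't prescribe a fix here, but one hedged starting point for getting visibility into that account sprawl is AWS Organizations. The sketch below is illustrative only; it assumes accounts have been consolidated under an Organization and that the caller has the organizations:ListAccounts permission.

```python
# Sketch: inventory the AWS accounts under an AWS Organization so security
# teams can at least see the "shadow IT" footprint. Assumes boto3 is
# installed and the credentials allow organizations:ListAccounts.
import boto3

def list_org_accounts():
    org = boto3.client("organizations")
    accounts = []
    for page in org.get_paginator("list_accounts").paginate():
        accounts.extend(page["Accounts"])
    return accounts

if __name__ == "__main__":
    for acct in list_org_accounts():
        print(f"{acct['Id']}  {acct['Name']}  status={acct['Status']}")
```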
InfoQ: In your upcoming talk at AATC, you plan on covering how companies build security into their process from the start. Does that practice look different if your company has a dedicated information security team already?
Cheslock: In many ways, the steps to bring security into an existing DevOps process at a small company can work at a big company as well. The goal is to work with your security teams and involve them in the tooling and workflows your dev and ops teams are already using (like continuous integration and delivery). Another goal, for companies that don't have a dedicated security team or have no idea where to start, is to start small and not attempt to boil the ocean. In my talk at AATC, I'll be covering things as simple as continually monitoring your SSL certs all the way up to continuous security monitoring and alerting.
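As a concrete illustration of that "start small" advice, a certificate expiry check is something a CI job or cron task could run today. This minimal sketch uses only the Python standard library; the hostname list is a placeholder for your own domains.

```python
# Sketch: check how many days remain on a site's TLS certificate, so a
# CI/cron job can alert well before it expires.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(hostname, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'notAfter' is a string like 'Jun  1 12:00:00 2025 GMT'
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

for host in ["example.com"]:  # placeholder: replace with your domains
    remaining = days_until_expiry(host)
    status = "OK" if remaining > 30 else "RENEW SOON"
    print(f"{host}: {remaining} days left [{status}]")
```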
InfoQ: In what areas do technical staff take the cloud for granted and assume that their workloads are inherently secure?
Cheslock: Many times, people think that using open source means their code is secure, assuming that "many eyes make secure code." As I'll detail in my AATC talk, that concept has been proven wrong time and time again by the many vulnerabilities in core pieces of open source technology. Closed-source vendors are no safer; we've seen plenty of news reports of critical vulnerabilities in closed-source software and hardware as well. Since no one is safe, I'll go through some steps leveraging MITRE and the National Vulnerability Database to help people understand their risk, help them prioritize their security updates, and finally talk about the risk and reward of nightly security updates and what they mean for your business.
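Cheslock doesn't spell out his workflow, but as one hedged example of tapping these sources programmatically, the National Vulnerability Database exposes a public REST API. The sketch below assumes the NVD 2.0 endpoint and the third-party `requests` library; the keyword is a placeholder for software you actually run.

```python
# Sketch: query the NIST NVD REST API for recent CVEs matching a keyword,
# as one input when prioritizing security updates.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def search_cves(keyword, limit=5):
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        summary = cve["descriptions"][0]["value"]
        print(cve["id"], "-", summary[:80])

search_cves("openssl")  # placeholder keyword
```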
InfoQ: Conversely, are there native cloud capabilities that companies *should* use that relieve them of a particular aspect of security?
Cheslock: The great news is that cloud providers like AWS are doing great things in the security space to help their users better understand what is going on. If you are running on AWS, you can use tools such as CloudTrail to audit all the API calls on your account, and AWS Config to audit your systems and ensure they meet your compliance rules. Finally, you can leverage EC2-VPC (which is the default for all new AWS customers) to segment your systems behind private, non-routable networks and use network ACLs to restrict access. In many cases the tools to be more secure in the cloud are already there; users just need to learn what they are.
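By way of illustration (not from the interview itself), here is a minimal boto3 sketch that pulls recent CloudTrail events, assuming CloudTrail is enabled on the account and the caller has the cloudtrail:LookupEvents permission.

```python
# Sketch: audit recent API activity via CloudTrail, e.g. to spot
# unexpected console logins or security-group changes.
import boto3

def recent_events(event_name="ConsoleLogin", max_results=10):
    ct = boto3.client("cloudtrail")
    resp = ct.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": event_name}
        ],
        MaxResults=max_results,
    )
    for event in resp.get("Events", []):
        print(event["EventTime"], event["EventName"],
              event.get("Username", "-"))

recent_events()
```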
InfoQ: Should companies wean themselves off "perimeter security" as a dominant application security strategy, and if so, how?
Cheslock: When moving to the cloud, you can still attempt to maintain a "perimeter" by routing all your traffic through your internal data centers, but eventually you may move off your internal data centers entirely, so you have to be prepared. The reality is that many companies use "endpoint" security tools to track and monitor their users' laptops and mobile devices. We need to take the same approach with our servers. Every server running code is an endpoint, and enabling a continuous security monitoring tool can help you track and identify more than just zero-day exploits. It can help you identify internal bad actors who might be stealing intellectual property using valid credentials. The risks to a company often come from inside as much as from remote attackers.
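To make "every server is an endpoint" concrete, here is a deliberately toy sketch (an illustration of the idea, not Threat Stack's product): diff the ports a host is listening on against a known-good baseline and flag anything new. It assumes the third-party `psutil` library; real continuous security monitoring goes much further.

```python
# Sketch: flag listening ports that aren't in a known-good baseline,
# a minimal stand-in for host-level ("endpoint") monitoring on a server.
import psutil

BASELINE = {22, 443}  # placeholder: ports you expect to be listening

def unexpected_listeners():
    listening = {
        conn.laddr.port
        for conn in psutil.net_connections(kind="inet")
        if conn.status == psutil.CONN_LISTEN
    }
    return listening - BASELINE

for port in sorted(unexpected_listeners()):
    print(f"ALERT: unexpected listener on port {port}")
```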
InfoQ: What security aspects should NOT be lifted-and-shifted from on-premises to a cloud environment?
Cheslock: We touched on this in the previous answer: "edge-based" network security monitoring tools. The challenge is that in the cloud, every one of your systems could be acting as an endpoint, and the "edge" of the network can be far wider than you would normally experience in a single data center. It gets harder still when companies go multi-cloud; now you need to deal with disparate networks and providers that in many ways don't natively integrate.
InfoQ: Do your security recommendations change if someone moves "up the stack" from bare metal or virtual cloud servers up to PaaS or functions?
Cheslock: Providers such as Heroku, Google Cloud Functions and AWS Lambda make the concept of securing your systems more interesting when you don't have any servers to run your code on. These are often referred to as "serverless": your code executes inside a provider, on systems you likely don't have any control over. In many ways this can make you more secure, since you are reducing the number of endpoints you need to secure, but in the end it pushes those security challenges over to the provider. AWS uses its Identity and Access Management (IAM), meaning you are now in full control of providing access to your functions, and you need to ensure that access is as least-privilege as possible. Additionally, your code needs to get to the provider somehow, which means you'll still be running systems that do continuous integration and deployment; that is where adding security testing and static code analysis tools on the build-and-deploy side pays off.
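As a hedged sketch of what least-privilege access might look like for a single Lambda function's execution role, the policy below grants only log writing and reads against one DynamoDB table. The ARNs, account ID, and names are placeholders, not anything from the interview.

```python
# Sketch: a least-privilege IAM policy document for one Lambda function --
# it may write its own logs and read one table, and nothing else.
import json

POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            # placeholder ARN for the function's own log group
            "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/lambda/my-func:*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            # placeholder ARN for the one table this function reads
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/my-table",
        },
    ],
}

print(json.dumps(POLICY, indent=2))
```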
InfoQ: How does data security—including access control, encryption at rest and in transit, change data logging—look different in an -aaS world?
Cheslock: Often, when companies run inside their own data centers on networks they "own," they will run many of their services without thinking about encryption, assuming that physical security takes care of the problem. When moving to the cloud and shared infrastructure, encrypting your data at rest becomes critically important to ensure data is not leaked through bugs or errors on the provider's side. Additionally, when running on someone else's infrastructure, you need to ensure you are using TLS to secure your services internally, just as we run SSL/TLS on our external websites. Many tools make this very easy to do nowadays, and I'll be talking more about how I've done it in the past at AATC.
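Cheslock doesn't name the tools he uses, but as a minimal sketch of internal TLS with verification against a private CA, the standard-library `ssl` module is enough. The file path and hostname below are placeholders.

```python
# Sketch: connect to an internal service over TLS, verifying its
# certificate against a private CA bundle, just as a browser would for
# an external site.
import socket
import ssl

ctx = ssl.create_default_context(cafile="/etc/pki/internal-ca.pem")
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse older protocols

HOST = "internal-api.example.local"  # placeholder internal hostname
with socket.create_connection((HOST, 8443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print("negotiated:", tls.version(), tls.cipher()[0])
```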
InfoQ: Is it a false choice to decide between "speed" and "safety"?
Cheslock: In the past, we had to choose between speed of deployment and the safety of deploying and managing that code. That was the pre-DevOps conversation: how can we both move fast and stay highly available? In many ways those are now considered "solved problems," and we are asking the same questions in the DevOps and security spaces. I think that by using many of the tools and technologies that enabled devs and ops to work better together, we can tie security into those same processes and procedures and move quickly while increasing our security posture.
About the Interviewee
As the head of Threat Stack's operations and support teams, Pete is focused on delivering the highest level of service, reliability, and customer satisfaction to Threat Stack's growing user base. An industry veteran with nearly 20 years' experience in technical operations, Pete understands the challenges and issues faced every day by security, development, and operations professionals, and how to help address them. Prior to Threat Stack, Pete held senior positions at Dyn and Sonian, where he built, managed, and developed automation and release-engineering teams and projects.