Transcript
Rushgrove: Thanks for coming to the last talk of the day. As I mentioned, I'm Gareth Rushgrove, @garethr pretty much everywhere on the internet. I'm a Product Manager at Docker, responsible for a lot of our developer-facing tools. If you're using Docker Desktop on Mac and Windows, that's one of the things I'm responsible for.
This talk is hopefully not going to be too meta. This is sort of a warning that I might get a bit carried away. I'm going to start with a bunch of history. Not like ancient history, 10 years ago or so, really talking about infrastructure, infrastructure as code, DevOps and some of the things we can learn from that moment in time if you like, that sort of movement.
I'm doing that to set up the next bit where I'll talk more about where does security stand today (and obviously it's the security track so I'll try to get it back onto topic). Really, what are some parallels there? What are some of the things that are different? What are some of the things that are actually quite similar and what can we learn from what happened with the explosion around DevOps and infrastructure as code?
I'll get into a bunch of examples as well, so not all theory and history and meta stuff. I'll show a number of examples of tools that maybe get us somewhere, maybe don't when it comes to actually taking security and making it a more programmatic discipline. We'll look at several tools, we'll look at some of the pros and cons and where they are on that journey. Compare and basically bring back some of the learnings from how other tools have worked. Hopefully that's interesting. If it's not, feel free to leave now.
A Little History
I said going back 10 years or so. I love this quote. I was actually going back through talks I did a long time ago, in internet years, about APIs and the emergence of APIs as the product, as the thing. Suddenly you could interact with computers via programmatic interfaces. This has underpinned an awful lot of things when you think about it, whether that's cloud computing or IoT now - the idea that everything should have an API. We don't use the word mashup anymore because it's just what we do all the time now. But this was the point at which there was suddenly a realization that APIs are important. Infrastructure as code as a banner, as a discussion, came up around the same time. A lot of this was really applying that idea of "Why can't I have an API for my infrastructure?"
This is happening at the point where yes, cloud computing is there, but it's certainly not where it is today, it's certainly not something anyone is really using. We're talking about our servers and our switches and our physical hardware that definitely doesn't have an API or, at least one that's anywhere near specified. Everything is different, everything is end-user driven.
A lot of the tooling that we've got on here is all about that idea of “give my server an API.” Now, infrastructure as code didn't start with the word and then a bunch of tools came along. People were building things, often quite a bit before this point. But the idea that there's a banner suddenly, that there's a number of people pursuing different approaches to fundamentally the same or similar problems, I think is always a good sign. Getting too tribal about tools can be bad. Getting into a discipline in an area where there are different tools doing different things, can be actually interesting to everyone, even if you end up using just one of those tools.
I could've had more logos, but I didn't want to get carried away. Certainly CFEngine pioneered a bunch of stuff here; there's a whole history there that I could nerd out on but won't get into.
A lot of it was driven by these three people. If you ever see any of them, say a thank you. This is Luke Kanies, Adam Jacob, and Mark Burgess - the creator of Puppet, the creator of Chef, and the creator of CFEngine, respectively. All super interesting people, and they were all running servers in some capacity. They were frustrated sysadmins who built tools to solve their own problems. I think there's a tale there as well.
From Ad-hoc to Software
For those who aren't as familiar, maybe you haven't spent time on the infrastructure side, it's easy to skip over to where we are now and go, "Well, of course it's like that. Of course it's software." This was really all about moving on from an ad hoc approach: logging into individual machines that you guarded closely and didn't let anyone else near, using your favorite text editor of choice - or whatever happened to be on that machine - modifying some text files, then running a bunch of commands and remembering to do the 50 things in the right order, which yes, you wrote down last time, but you're pretty sure that's not the right order, and then handing it to someone else who can't understand how you've written things down. Or you were in a world of GUIs - and not just in the Windows world; on Linux too, it was all just point and click.
We moved that to software so whether it was something like Puppet, whether it was Chef, whether it was later Ansible, whether it's CFEngine before it, whether it was other tooling fundamentally, we moved it from a world of ad hoc individual end-user driven imperative commands, to describing what we wanted in software. That's the leap. That's the thing that we talk about as infrastructure as code. That's the banner under which lots of different tools to different approaches could solve the same problem.
DSLs and the Configuration Clock
That variety of different solutions, different tools - you can go like, "Oh, why didn't everyone just agree?" Which has never, ever happened in the history of computing but it's still a good question to ask. Who has seen this blog post before or heard about the configuration complexity clock? No one.
It's probably one of my favorite blog posts on the internet. I think it's relevant not just to configuration but to anyone interested in languages, DSLs, why different programming languages are sometimes the right or wrong approach, and why, if you program long enough, you tend to go around in circles about the best solution. At some point you're absolutely convinced that the DSL you wrote is the optimum way for a team to configure or build or write some software. At some other point in your career you're, "We should've just hardcoded everything." At some point you're, "I have created the perfect configuration file in my XML or INI or JSON or YAML, or whatever year it is depending on your configuration file format. Now we can just do it in data." And you go round and round the circle, and that's actually why you end up with a bunch of different tools - or at least one of the reasons why. It's not just all about people. I would say one of the things that characterized the conversations around infrastructure as code was this exploration of domain-specific languages and configuration, and of what you expose as a user interface to, ultimately, a very large number of people, some of whom are experts and some of whom are not.
Enter DevOps
If infrastructure as code was, to a certain degree, about the tools - yes, there was some practices stuff there, but it was about applying tools, applying programming, to systems administration - then at the same sort of time, and involving a lot of the same people (though not a complete overlap), came along this banner of DevOps. Very similarly to infrastructure as code, it's not that the word came first and from the word came all of this stuff about how deployment might work, or how we could work better together across different organizational silos. All of that was already happening, and then we got a word, we got a banner, to have those conversations around.
This isn't a talk about what DevOps means to me, but really that's it - it's a banner under which people have these conversations about practice. Not just technology, not just code, not just the tools, but how they were applied in the context they were applied.
Everyone does like a definition, and this is by far and away the best distillation of DevOps, from John Willis - culture, automation, measurement and sharing. Apply your thinking, apply your rigor, apply the problems you are trying to solve under those banners and emphasize those as needing answers. How do we share? How do we measure? What do we measure? How do we automate? What do we automate? It gets you actually quite a long way without needing manifestos and the rest of it.
Co-evolution of Tools and Practice
What you observe here is interesting, and this is not just applicable to this specific problem. It just happens to be, I was at the right place at the right time to see these two things. You often have a co-evolution of tools and practices. It's not often enough to have revolutionary tools just come along and people go, "Of course." And just start using them or swap them out for something else. It doesn't really work.
Revolutionary things tend to require some give in how you are working, how you are applying them. We talk more generally about continuous delivery and deploying every day, or every hour, or every minute, or deploying 2,000 times a day. Ten years ago there was a talk at the Velocity [Conference] about five deployments a day, and people were, "Burn them. They're heretics. What are they talking about? This is crazy."
Now I can say, "Yes, people deploy 2,000 times a day," and everyone is like, "Yes, I've heard about it. I've seen someone do it. Sounds a bit much, but we deploy every day." We've moved on to there. Partly that's one of the drivers for a lot of the conversations about needing to structure ourselves differently. It's no longer good enough that the operations team is halfway around the world and disconnected, because our deploying is triggering work for them. We need to work together, otherwise everything is going to be on fire.
Generalizing from that: when you're thinking about tools, also think about what's going to change on the practice side. It wasn't an accident that DevOps and infrastructure as code - and, more generally, cloud computing - came along at the same time. They're co-evolving aspects of the same thing. Cloud computing is nearly a realization of that API-driven promise that we started with.
Other People's Computers
It might just be other people's computers, but it's really nice that someone else is dealing with that for you behind an API. If you like, tools such as Chef and Puppet and CFEngine tried to take something that really didn't have an API and give you an abstraction you could reason about, deal with and manage. The cloud just started that way - it was just an API. And we say, "Oh, yes, there are servers there." We don't actually know; there might not be, maybe it's magic. It doesn't matter - there's an API, and we can reason about that API and deal with the consequences of calling it.
Why All the Fuss?
It's fine that there's a bunch of sysadmins who came up with some crazy tools, and it's fine that people worked out ways of deploying every day or working better together. All of these things sound good, but why all the fuss? Why were people interested? Why do you get CIOs going, "We must do the DevOps"? The reason is that alongside the tech geeks geeking out, this has had a material impact on organizations and business. At this point, 10 years on, we've got more than just good spider sense. It's not just people going, "I'm at Netflix. We do insane things with computers and we talk about it at conferences." That can get you going, "Oh, this is interesting. I hadn't thought about that. Maybe we should try some of that." But that's not proof.
At this point there are enough signals, there is enough data, there are enough studies - not just by people like myself on the software vendor side, but from industry analysts, journalists, academics. We've had enough time and we've got enough data to point to things like this from the State of DevOps Report. Not everyone might see these results, but even knowing that they're possible, that you can make that sort of leap forward, makes people look up and notice. It's reasonable to go, "Whoa. There's something here and we're doing it." And we know that.
What Did We Learn?
What are some of the things that we can take out of that and apply back to our security topic in this case? I think these are generalizable for other things as well.
One of the things that happened across all of those tools was that not everyone needed to be an expert - expert in terms of the tools' usage, that is. This is presumably a predominantly software development audience; you're probably experts in programming, whether that means programming generally or programming in some specific domain. It turns out that systems people - systems engineers, systems administrators - are often pretty generalist, but also specialized. Maybe they're experts in this switch, or this networking gear, or this type of server. This whole movement of working differently, software eating the world, cloud computing, fundamentally changes a lot of things there, but it doesn't make their existing expertise go away. It does mean they need new expertise quickly in order to be useful. Scaling organizations is hard, and not everyone needs to be an expert - and a lot of the tools embraced that, as this example from Ansible Galaxy shows. This is an Ansible module that has been downloaded 1.5 million times. I can't actually remember what it was for; I think it was for Python.
Just the scale of reuse as a way of saying, "Well, yes, an expert wrote this, but other people can adopt it" is powerful. And you see ratings on there, as we do with literally everything now - you can rate it like your shopping. The social signal side of things is interesting.
The Utility of a Marketplace
What's another thing that cropped up in all of these tools and drove the utility - or rather the discoverability - of that shared content? The marketplace. Docker Hub is an example, as are Puppet Forge, the Chef Supermarket, Ansible Galaxy and the Terraform module registry. All of these things are saying, "Let's aggregate things together, let's make them discoverable." There's a utility in a marketplace.
Version Control as Change Control
One of the other things that came up, and I think proved nearly the secret sauce in a lot of this, was that version control is change control. Source control is something that, as programmers, we've been used to for arguably a long time as an industry. Not everyone might be using it today, but the majority are. Yes, some of the tools previously used proved to be terrible - Visual SourceSafe. I have recurring nightmares about time zones in Visual SourceSafe that I won't get into here.
Ultimately, I think we see this when we interact with other types of assets. The power that we have in version control systems is a bit crazy. No one knows all of Git, but you know enough to be able to do things that, if you think about Word documents, you go, "Why can't I just do this? Why can't I have the power of version control with spreadsheets and Word documents?" You're starting to see some of those things emerging now.
It's powerful stuff, and organizations had - and still have - an insistence on change control, an obsession with it, often for regulatory or risk management reasons. Unless you can solve change control problems, you're not getting your cool thing into production, however well it's going to work. It turns out that, once we realized that version control provides an unbelievably powerful base on which to build change control systems, getting these things into production proved a lot easier than for maybe other technologies.
Shared Tooling
One of the things that happened around this wasn't just the tools themselves. It wasn't just that they were standalone things that you took and used. A community emerged that built tooling around them - the number of things that made Puppet better, that made Chef better, that make Ansible better, not just in terms of content, but tools that genuinely make the tools better. My favorite examples are the testing ones. As someone who came to infrastructure from software, the idea of writing unit tests for my infrastructure makes sense to me. Having tools to do that is incredibly powerful, and really reinforces why software is the right approach to managing infrastructure: because you can treat it like software. Not just a description language - it's code. I can test it, I can lint it, I can have style checks, and I can have editing support.
Dashboards are a nice, easy example - we're generating data and displaying it in different ways. But the long tail of shared tooling, for different people and different types of use cases - all of the things that succeeded have that in spades.
The Importance of Community
Underpinning all of that was a community, whether it was local meetups - and a lot of this came before the companies that were formed around these tools; the community was nearly the instigator and fuel for them - or groups of people on IRC, which, for people who are young enough in the room, is like Slack but all in one place. And this isn't just a bunch of people; this is actually ChefConf, I think. This is the community of people that build infrastructure. All of those things are part and parcel of why infrastructure as code succeeded and got where it is today.
Parallels with Security
Stepping back, going sideways to security. This isn't true everywhere, and people are pushing the envelope, but in general there are a lot more spreadsheets in security at this point than in infrastructure. Ten years ago infrastructure was a lot of spreadsheets - if anyone's ever had to do a manual CMDB update of the spreadsheet before you could do anything, that's a lot less common today. Security - paraphrasing a little for comic effect - is a lot of manual processes. It's a lot of people processes, a lot of individual gatekeepers, and a lot of spreadsheets.
Partly that's down to a still very siloed model in many organizations when it comes to security; it's there as a gatekeeper. Sometimes it's structurally set up like that. Sometimes it's about funding - and, as often quoted (though it needs some proof), an organization might have 100 developers, 10 operators and 1 security person. How accurate is that ratio? I don't know, but it feels about right based on my personal experience.
Security often ends up feeling a bit more of a silo, feeling a bit more outside. You can argue that's because of the nature of the work but I'm not sure that's a good enough excuse for the benefits that we've seen elsewhere, where people genuinely work together. And ultimately it shows.
This is information from the DORA Accelerate State of DevOps Report - a large-scale survey with a lot of data behind it, which gets to the point of categorizing people into low performers and high performers. If you haven't read the Accelerate book and you haven't seen it 10 times on slides, at least at QCon, then you've probably not been here all day. "Low performers take weeks to conduct security reviews and complete the changes identified" - that was one of the findings there. For very important things, weeks feels like a long time.
A friend of mine, Vincent [Janelle], said, "Most security teams would rather that policy not be published, or it doesn't make sense to open source some things." This comes back to that point of sharing - and sharing is one of the solid fundamental tenets of DevOps. A lot of those characteristics of the tools that succeeded fundamentally empowered sharing and benefited from it. Yes, there are genuine secrets and sensitive things, but where there's just a resistance to sharing in general, you tend not to see sharing, and you tend not to get the positive impact, the effects that come from it. We are seeing more people talk about security, we are seeing the security community break out of its bubble a little bit, but it's early days. That was true of the sysadmin community 10 years ago - I think that's the thing here - and I think it's different today.
I said there are people succeeding. We are starting to see the early success stories of people changing how they do security for the better. One of those ideas, which was true in infrastructure, is the importance of thinking in pipelines - and this happened beforehand in software itself. You tested things and ran commands and said, "Yes, that looks good," then you manually deployed it to production. That's how software was deployed. Today people would be, "Oh, no. I've got a pipeline - we test it there, we deploy it there." We've got that workflow aspect in software. The same thing happened with infrastructure: it was ad hoc commands, but once it was software I could apply continuous integration patterns, I could apply continuous delivery patterns, because it's software.
This is from the upcoming DevSecOps Community Survey - it's out today. The report is really good, loads of details; thanks to the Sonatype folks for giving me a sneak peek. The only way to really ensure software security is to put automated security controls in the pipeline, and the only way of doing that is to have those controls in software.
There's loads of good data in here about organizations that have mature DevOps practices versus none. Remember, we've got good data saying that if you're a mature DevOps practitioner, you're likely to see organizational benefits as well. But this big gap between the haves and the have-nots, if you like, around who's using tooling to really embrace security - this is the spreadsheet gap, because I'm not counting spreadsheets as tools. I love spreadsheets, but they're the wrong tool sometimes.
There's a load of other bits - I could fill a deck with graphs from that report. Organizations with mature DevOps practices - which, remember, are aligned with the organization doing well - are hugely more likely to be integrating automated security throughout the process.
Security Automation Is Not New
One thing I want to make sure I avoid - and so I put a slide in specifically - is the suggestion that security automation doesn't exist, or that people aren't already doing it, or that it's all just manual pen testing and there are no tools. There are loads of good tools, and that's been true for a long time, and there are vendors in the space already. The same was true in infrastructure. Infrastructure as code was a banner around a set of tools that took a certain approach, and some of those were 10 years old at that point. There were lots of tools in the space of infrastructure management - and Bash is pretty amazing when you consider it - but the overall approach had reached a local maximum. I think security automation, as we view it today, is similar to a certain point.
"Elite performers build security in and can conduct security reviews and complete changes in days." This is the other half of the earlier quote. It's possible to be much better than we are in most cases. We've seen that in infrastructure, we've seen that with DevOps, and we're starting to see it in security - but "How do we get there at scale?" is the next question, the question I'd like to see people ask and answer more.
Security as Policy Management
For me, some of that is about taking a singular lens. This isn't to say that all of security is this, but I think it's a useful lens to view it through. All of infrastructure and operations isn't actually configuration management and managing servers, but what came out of a lot of that movement was, "If I automate a lot of the drudge work, I have time to think about higher-level problems," and out of that time and space came a lot of the more modern practices. How do we get time and space in security for people to come up with those practices? We need to get rid of the drudge work. And a lot of the drudge work, I would argue, is about controls - it's policy management.
Policy as Code
How do we get to policy as code? How do we get to a point where we can take all of that drudge work, we can take those controls, we can encode them in software and we can apply software engineering to them, in the same way as we've done for infrastructure? Make time to innovate on the practices.
ModSecurity: Web Application Firewall
That's the setup. I'm going to dig into a few tools that have interesting properties. Who's used ModSecurity previously? ModSecurity is a web application firewall, and it's there to filter out attacks - people trying to launch a SQL injection attack on you, or trying to access your WordPress admin pages.
ModSecurity has been around a long time; it's not a new project by any stretch. ModSecurity 3.0 is the new hotness. It allows you to write rules that protect your applications, and it has a DSL for doing so. That DSL looks a bit like this: you see a lot of SecRule, and then - luckily for everyone in the room - I'm not going to go over what the rest means, partly because literally no one can remember. Things date, and the terseness of the ModSecurity DSL is a limiting factor, I would argue - a high barrier to entry for anyone writing new rules.
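To make that terseness concrete, here is a hedged sketch of what a SecRule looks like. The rule ID and message are made up for illustration, but the general `variable "operator" "actions"` shape and the `@detectSQLi` operator are the real DSL:

```apacheconf
# Inspect all request arguments for SQL injection;
# deny the request with a 403 and log it if one is detected.
# (id and msg are illustrative, not from a real rule set)
SecRule ARGS "@detectSQLi" \
    "id:1001,phase:2,deny,status:403,log,msg:'SQL injection attempt detected'"
```

Even in this small example, knowing which phase to run in, which variables to inspect and how actions combine is the kind of expertise the talk is describing.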
The reality I've found is that people don't write ModSecurity rules. It's possible, totally - but you pretty much copy and paste them from the internet, or you use a distribution. There are people who can write these rules, and there are groups distributing rule sets; probably the most well-known is the OWASP Core Rule Set. OWASP has a broad range of security projects - a very interesting organization - and they have a set of ModSecurity rules that you can just download and use to protect your applications. Back to that sharing concept: that's powerful.
There are some other tools in the ModSecurity ecosystem as well, and some of them are fairly new. ModSecurity has been around so long, in a certain space, that it's below the radar for a lot of people, but it's there. This is a framework for testing web application firewalls in general, which lets you write tests against your ModSecurity rules. The tests end up being a lot more readable than the ModSecurity config, thankfully. It's a relatively recent project from Fastly and some other folks in the space. So there are some ecosystem tools - though not a lot, I have to say.
There are a few things there that come back to policy as code - shared content, widespread usage, tooling. But I'm not here to recommend ModSecurity as the answer, partly because it hasn't demonstrably moved us onwards, and I think some of the reasons tie back to why the infrastructure tools succeeded. Frankly, writing ModSecurity rules is an exercise in craziness: unless you are a Perl regex hacker, this is not for you - it's mind-blowingly tedious, whatever your opinion on different DSLs or different approaches to configuration. Personal opinion, etc.
There is some shared content and there is widespread usage, and I would say ModSecurity is actually a useful tool in its own right. But it doesn't have the properties that I'm interested in - the ones I think were instrumental in seeing the infrastructure tools explode in usage.
It's also tied to technologies that are maybe of a moment in time. Yes, there's lots of Apache out there, and yes, ModSecurity 3 is becoming easier to use in the context of NGINX, but lots of people are moving on to the next thing, and the next thing, and the next thing. It has nothing to say about container orchestration, or service mesh, or the conversations happening around serverless - whatever it might be that we're moving to - even if we know that having it in place right now, for the things we're on, is a good thing.
There are a bunch of other tools more specific to the domain that I won't get into, but ModSecurity was there early. The idea of encoding your policy in code is a good one, but maybe it didn't have the properties to be a more general answer, widely applicable to the types of things I'm talking about. People use it, versus it being generative.
InSpec: Compliance as Code
A more modern, more recent tool is InSpec. Has anyone in the room used InSpec at all? InSpec is a tool aimed at compliance environments. It's actually Ruby - so depending on your opinion of Ruby you may like it or not - but you've got a full programming language there, and a way of writing controls in the style of the RSpec testing framework. Here we're writing a specific rule - we're saying, "This file should match its mask." We're writing statements of fact that we want our system to adhere to.
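As a rough illustration of that RSpec-style shape, here is a minimal InSpec control - the control ID, title and values are hypothetical, but `sshd_config` and `file` are standard InSpec resources, and each `describe` block is exactly one of those "statements of fact":

```ruby
# Hypothetical control asserting facts about SSH configuration
control 'sshd-01' do
  impact 0.8
  title 'SSH server should be locked down'

  describe sshd_config do
    its('PermitRootLogin') { should eq 'no' }
  end

  describe file('/etc/ssh/sshd_config') do
    its('mode') { should cmp '0600' }
  end
end
```

Because it's plain Ruby underneath, these controls can be versioned, reviewed and tested like any other code, which is the point being made here.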
This does have some of the properties that we were talking about. It has been extended, and can be extended by anyone, to add new types of things the framework is aware of. There's a whole set of resources - a "resource pack", it's called - for AWS resources. My policy toolkit doesn't need to be just about bits on disk and packages, and files, and my SSH config, and my firewall rules, and my network rules. It can be about my cloud instances and how they're actually configured and set up at the API level. Ultimately, there's an API somewhere, I have a bunch of resources, and I want to set some policies about them. I think that's a good pattern - and InSpec has the ability to be extended.
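A hedged sketch of what that looks like with the AWS resource pack - the bucket name is made up, but the `aws_s3_bucket` resource and its matchers are from the pack, and the assertions run against the cloud API rather than bits on disk:

```ruby
# Policy about a cloud resource, checked via the AWS API
# (bucket name is illustrative)
describe aws_s3_bucket(bucket_name: 'my-example-bucket') do
  it { should exist }
  it { should_not be_public }
end
```

Same DSL, same workflow - only the resource being interrogated has changed from a file on disk to an API-level object.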
InSpec is part of this suite of tools from Chef. Chef has the Supermarket and their Supermarket is like Docker Hub or Puppet Forge. It's basically a shared repository of code both from Chef themselves, and from third parties and from other vendors. InSpec has integrated support for that - you can see third-party profiles. People share their profiles, they share their policies. This is what we want to get to.
Just going down the road, you can see there are things like the CIS benchmarks and a whole bunch of other stuff in there. Some of this is driven by a community - the DevSec folks, at the intersection of DevOps and security, have really gone to town on turning a lot of the good ideas and best practices into software you can run. Rather than issuing a white paper saying, "Hey, here's how you should configure your server," they've built InSpec profiles. They've built Puppet and Chef code that you can just take and use to secure your servers. Is this going to match 100% of your internal policy? No. Is it likely to be a good starting point for people who haven't even got to the point of having conversations about what that policy should be? Absolutely. It's also a great example of how to anchor the policy: even if you don't use it directly, from a learning perspective, seeing a real code base, a real implementation, is useful. We're starting to see some of the things that we saw on the infrastructure side.
And that makes it easy to use without expertise. I talked about not everyone having to be an expert, and as software developers you're unlikely to become an expert in hardening Linux machines or in a specific compliance regime. You might do; you might choose to. Or you might be, "That sounds terrible." Being able to give you code that says, "Well, actually, you don't need to be the expert - the code is the expert. Apply this." gives you superpowers. InSpec actually supports running profiles directly from a third-party source, and that runs a whole bunch of checks - 100-plus checks from one command. I install one tool, run one command and bang: "Oh, that's a bit of a thing! There are a lot of problems I didn't know about." Now I can go whack-a-mole, fixing individual things, because it's telling me what's wrong.
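That one-tool, one-command flow looks roughly like this - the profile here is the community DevSec Linux baseline, and the exact output will of course vary by system:

```shell
# Run a shared hardening profile straight from its public repository
inspec exec https://github.com/dev-sec/linux-baseline
```

One command pulls down an expert-written profile and reports every check that passes or fails on the machine it's pointed at.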
InSpec has lots of things to like. There's maybe an argument that Ruby was a moment in time. Ruby's still really popular, but it's maybe not the language that everyone is heading towards at this point. I really like Ruby, but on the other hand language communities are fickle and fashion-conscious, and there are more people using other things now. There's loads of high-quality shared content, and the Supermarket acting as a central repository works really well. I would argue, though, that you need a certain level of expertise to get involved here. There aren't tools for non-programmers, and I think the barrier to entry, for people coming from the security side, is a lot higher than it will be for people who are probably coming to QCon and are interested from the software development side. InSpec's an interesting tool. I'd love to see it succeed; I think there are maybe some barriers in its way.
Open Policy Agent
Last but not least, I want to talk about Open Policy Agent. Who in the room's come across Open Policy Agent before? This is mainly popular at the moment in the Kubernetes community, though I'd argue it's more generally applicable than that. Open Policy Agent allows you to express policies in a high-level declarative language that promotes safe, fine-grained logic. Ultimately it's a DSL for describing policy about structured data.
We can write rules in Rego, which is basically the DSL for these use cases. It's a separate language, and that has pros and cons - back to that idea of the configuration complexity clock. I was going to do some demos but had to swap my laptop at the last minute, so here is a simple policy for Terraform code. A Terraform plan can be compiled down to JSON, and we can apply this policy to it. This policy is going to iterate over all of the resources, grab all of those whose type starts with aws_iam - basically the IAM resources - and it's going to say, "Deny." If you're trying to change any IAM rules it's going to go, "No."
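As a rough sketch of what a policy like that might look like - not the exact code from the slide, and using the classic (pre-1.0) Rego syntax - assuming `input` is the JSON produced by `terraform show -json` on a plan file:

```rego
package terraform.analysis

# Hypothetical sketch: deny any plan that changes IAM resources.
# Assumes `input` is the JSON from `terraform show -json tfplan`.
deny[msg] {
    r := input.resource_changes[_]
    startswith(r.type, "aws_iam")
    msg := sprintf("changes to IAM resources are not allowed: %v", [r.address])
}
```

You would evaluate it with something like `opa eval --input tfplan.json --data policy.rego "data.terraform.analysis.deny"`.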
You can imagine the other types of things you can do over any type of structured data. A common example in the Kubernetes community would be blocking images from other repositories. You only want images coming from your internal repository, rather than a random place on the internet or an untrusted source. That's easy to encode in a policy in Rego and apply with OPA.
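That kind of rule, running as a Kubernetes admission controller, might look something like this - again a hedged sketch in classic Rego syntax, where the registry name is a placeholder:

```rego
package kubernetes.admission

# Hypothetical sketch: only allow Pod images from the internal registry.
# Assumes `input` is a Kubernetes AdmissionReview request;
# "registry.internal.example.com" is a placeholder for your trusted registry.
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    not startswith(container.image, "registry.internal.example.com/")
    msg := sprintf("image %v does not come from the internal registry", [container.image])
}
```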
You might be writing tests for a Helm chart. I think that's a really good use case. Here is actually a nice, simple example, and all we're saying is, "Well, it's a Deployment and it hasn't got securityContext runAsNonRoot set to true."
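A sketch of that check in the conftest style - a `deny` rule in package `main`, evaluated against each manifest rendered from the chart (the field path follows the standard Kubernetes Deployment schema; this is an illustration, not the code from the talk):

```rego
package main

# Hypothetical sketch: deny Deployments that don't set
# securityContext.runAsNonRoot to true in the pod spec.
deny[msg] {
    input.kind == "Deployment"
    not input.spec.template.spec.securityContext.runAsNonRoot
    msg := "Deployments must set securityContext.runAsNonRoot to true"
}
```

You could run this via the Helm plugin mentioned below, or directly with `conftest test` against the output of `helm template`.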
We've set up a nice policy, and there's a Helm plugin that allows you to run those policies against your Helm chart deployments. Helm is a package manager for Kubernetes. All these things are just structured data, OPA is a way of writing policies against structured data, and it's got a testing framework built in. You can apply it simply on the command line, or you can run it as a daemon to act as a live filtering proxy for all the connections. It's a really powerful tool set.
It's pretty new, which is one of the reasons why I wanted to talk about it. You could do a whole talk on Open Policy Agent, and I think you'll probably start seeing those types of talks at conferences, hopefully. Its modern sensibilities are, I think, a good sign. I'm a fan of the DSL approach - not everyone will be, but that's fine. I like a good DSL.
It has built-in tools for testing, and it's widely applicable to different problems. I have, a bunch of times, written proxies to put in the middle of things to say, "Look, I want to filter out certain things. I only want things to go through here that meet a certain schema," for example. Open Policy Agent is a generic framework for doing that type of thing. Any time you're passing structured data around and you want to apply policy, OPA is a good way of doing it.
It's new, which isn't a good [inaudible 00:45:40] sign, and there are limited examples outside the Kubernetes community. There are examples, but not many real cases where I've seen it applied. There's no built-in sharing or central repository yet. I keep bugging folks and maybe it'll happen one day, but the sharing story isn't there yet; it's new. I like OPA and I'd like to see it succeed, so please have a look at it.
Conclusions
Rounding up: I think, with security tooling, we're at a point where we're definitely not across the chasm yet. Ultimately, not everyone adopts the same software at the same time. You do end up with most people adopting certain things that become popular enough to hit the general audience, but lots of things don't get across this maybe-imagined gap. We're definitely in an early-market situation here; nothing's really breaking out, nothing's really over that line. I think that says a lot about lots of things, not just the individual tools. It's about security and structures and how we incentivize people to work together - a lot of the things that we're seeing broken down by the DevOps conversation.
I don't think we're across there yet. A good example of that is public content on GitHub. It's not a perfect proxy, partly down to that idea that security people might not want to share. I can tell you sysadmins really did not want to share, and some of them still don't, so I don't think that's the explanation for the discrepancy here. But you see something like one and a half million manifests public on GitHub - that's a lot. I've done some data mining on that sample; it's good fun.
More than a million Dockerfiles, hundreds of thousands of Compose files, tens of thousands of Helm charts - that's not indicative of anything on its own, and those numbers are not directly comparable, but they are large numbers for the types of things they're trying to describe. For all the tools I just talked about, though, you don't see that much public content. That means it's not super visible, it's not easy to learn from, and it's not easy to share.
I do think this is a powerful idea. Unfortunately, if you came along expecting, "And here's how to do everything today," I don't think we're there yet. I don't think we have the tools. You can do this, but you're going to have to write it yourself. However, I think we can get to a place where we do have tools and we do have a general approach.
If you are building tools, then build for a community. First and foremost, make it possible for people to get involved. That's the only way you break out, and that's true inside your organization as well as if you're trying to build for a wider audience.
Adam [Jacob] is big on sustainable, free and open source communities, and there's loads of good guidance out there about that.
If you're using these tools, build what you're doing for sharing. Try to share by default, rather than getting permission after the fact once it's become a powerful, big thing. And it might not be the actual policy you share - it might be how you did something. It might be getting up and talking at QCon, it might be blog posts. It might be whatever it might be.
Put it in your context, in your organization. It's easy to look at the people in charge of some big open source project and go, "Wow." Actually, a lot of them started just like you: inside an organization, fixing their own problem, and the approach they took became the thing that you see in hindsight.