Transcript
Costlow: My name is Erik Costlow. I'm one of the members of the editorial staff over at InfoQ. I focus a lot of my technical writing on the Java Queue, as well as picking up various security articles. You'll see me covering things like JEP 411, the deprecation of the security manager, all the way to different exploits, for example, against the ESP32, a common chipset in many of the devices probably in your house.
We're going to talk about DevSecOps and application security: different techniques and strategies that people use along the path of DevOps. The aim of DevOps is to ship quality code very quickly, get it into production, and get a feedback loop. The symbol that we'll be working from is the standard infinity symbol that you see every time you look at information about DevOps. That's the path that we're going to be following: build code, integrate it, deploy it, monitor it, and bring it back for planning.
Ultimately, when I write news for InfoQ, there are four different types of articles that I tend to write. One is something was hacked; another is someone was hacked. I really want to write fewer of those articles in the future, which I hope is one of the reasons that we're doing this panel. The other two, which I actually like writing, are when something interesting is happening that relates to security, for example, the deprecation of the security manager, and when beneficial tools help teams improve.
In terms of the panel structure, the way that we've brought the guests on is to focus on the concept of journey leaders and destination leaders. Journey leaders are people who apply different security techniques across the entire path of DevOps, integrating security throughout the cycle. A destination leader is someone who spends the majority of their time highly focused on one concrete area of the cycle, ensuring that security is done very well at that particular location in the DevOps cycle.
We have four panelists joining me. There's Anastasiia Voitova, a destination and journey leader focused on securing applications across the SDLC with a heavy focus on cryptography. We have Rajiv Kapoor, a journey leader with NGINX, focused on security transformations and production monitoring. We also have Clint Gibler, a destination leader focused on secure production and analysis of code. And there's Andre Tehrani, a journey starter and guide, actually a recruiter, who helps developers realize career opportunities and structure, so that we can understand this semi-parallel market to software development.
Voitova: My name is Anastasiia. I work at Cossack Labs, which is a British/Ukrainian data security company. Basically, we do all the things related to data security: all different kinds of encryption, open source tools, proprietary tools, cryptographic research. All of them solve the same problem: how to secure data in a way that it can still be linked, but is encrypted, so it's kind of fun. Here, I will talk a lot about encryption, how encryption affects applications and infrastructures, and why it's an invasive security control.
Kapoor: My name is Rajiv Kapoor. I work here at F5 in the NGINX group. I'm responsible for taking NGINX security solutions to market. Personally, I'm really passionate about how security is looked upon not just as a tool or a practice, but also as a shared responsibility across the organization for making sure that applications are reliable, and customers have a great experience on them.
Gibler: My name is Clint Gibler. I work at r2c, an SF Bay Area startup building an open source static analysis tool called Semgrep. We're aiming to build something that helps you develop software faster and more securely. I also run a security newsletter called tl;dr sec, at tldrsec.com, where I take the latest and greatest talks, blog posts, and tools, and condense them into one nice, easily consumable place.
Tehrani: I'm Andre Tehrani, managing partner at Recrewmint, Inc. We are basically an executive search firm. We help organizations fill human capital gaps within their cybersecurity, software engineering, and crisis and resiliency portfolios. I'm looking forward to speaking with you all about how you can set up your career tracks, and any questions you have regarding certifications, careers, or the security perspective when it comes to software development.
Difference between DevOps and DevSecOps
Costlow: The first thing that I want to do is to make sure that everybody is on the exact same terms, because we have the DevOps lifecycle, which is that typical infinity chart that we've seen. Let's just cover: what is the difference between DevOps and DevSecOps?
Kapoor: I don't know if I want to take a stance on there being a difference between the two words, DevOps and DevSecOps. I think for me, it's more of a mindset that both imply, which is automating security from the get-go, having it as an integrated approach as part of the development process. Earlier on, DevOps implied that, especially as the articles were coming out, but now people have decided to add security to the term so that it's more explicit. This is about security being integrated into the application development lifecycle. I wouldn't worry too much about the differences between the two terms as much as the ethos behind them, which is having both security and application development teams working in harmony together.
Is Security Necessary?
Costlow: As I looked up ways to approach different questions about this, there was an interesting blog post by AWS. Just like the industry uses the term DevSecOps, I've also heard the term DevFinOps, about finance, especially with the cloud. AWS had posted a blog titled, "Introducing FinOps, excuse me, DevSecFinBizOps," where they just shoved a bunch of acronyms into the middle. There are a lot of terms that we can shove into the middle there, but are they really necessary? Because you can't do cloud without money, but can you do things without security?
Voitova: It might be an unpopular opinion, but of course you can do things without security. The question is, is it worth it? What are the consequences? I really like the idea of explaining security as building a house. You can build a house without gates, without fences, without fire extinguishers, and things like that, but what are the consequences? Putting a fire extinguisher in your house won't prevent a fire, but it helps you; it gives you an opportunity to continue on with your life after a fire happens, because you can mitigate the fire. Same with security. It's totally fine for some projects not to have security from the early days. That's ok. In fact, we've seen a lot of companies that start caring about security only later, at maturity stages, for example, Zoom. Remember all those security incidents happening earlier. Zoom was in a pretty good place, I will put it this way, because they had enough money and maturity to hire a security company to improve their security. Not everyone is so lucky. Many companies just don't live to that moment.
We can talk about risk management here. Instead of fire extinguishers, maybe you want to have nets against insects, or maybe a guard dog. This is all about risk management. What industry are you in? What data do your applications operate on? What are the regulations and standards for your industry and this data? What are the business risks related to losing this data, to someone attacking you, to an insider job? It's a typical risk management process. After that, calculate the risks. There are plenty of frameworks for how to do that. For hardcore organizations, I would start with NIST Special Publication 800-37, about risk management. That's NIST; it's for mature companies. There are also simplified approaches, like FAIR, which is a quantitative risk assessment framework. Understand the risks. Understand what can go wrong. Then understand how much money you should put into security, or how much time you have without putting money into security.
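To make that risk arithmetic concrete, here is a toy, FAIR-flavored calculation, a minimal sketch with invented numbers rather than anything from the panel. At its simplest, FAIR multiplies how often a loss event is expected by how much each event costs:

```java
// Toy FAIR-style annualized loss estimate -- a minimal sketch, not the full
// FAIR ontology. The scenario and the numbers are invented for illustration.
public class RiskSketch {
    // Expected annual loss = loss event frequency (events/year)
    //                      x probable loss magnitude ($ per event)
    static double annualizedLoss(double eventsPerYear, double lossPerEvent) {
        return eventsPerYear * lossPerEvent;
    }

    public static void main(String[] args) {
        // Hypothetical scenario: credential stuffing against a login endpoint
        double frequency = 0.5;      // one successful breach every two years
        double magnitude = 250_000;  // cleanup, notification, lost business
        System.out.printf("Annualized loss expectancy: $%,.0f%n",
                annualizedLoss(frequency, magnitude));
        // Compare this figure against the yearly cost of a mitigation
        // (say, an MFA rollout) to decide where the security budget goes.
    }
}
```

The point of the exercise is the comparison at the end: if the mitigation costs less per year than the expected loss it removes, the spend is easy to justify.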
The Different Groups Acquiring the Security Engineering Function
Costlow: We heard with DevOps, DevSecOps, DevFinOps, the acronyms that they shoved into the middle word, Sec, Fin, and Biz. What are the different groups that are now acquiring the security engineering function, or what does the overall org look like?
Tehrani: The shift that's happened in the security landscape is in the first line of defense. That first line of defense is no longer your security analysts and your risk analysts. It's shifted to the software developers. They're the first line of defense in terms of where the attacks are coming from, which is the web applications: your SQL injections, your cross-site scripting, your cross-site request forgery. These are all coming through your web apps. This is why, if your development teams don't create secure code, you're going to get a lot of vulnerabilities that you need to identify. By the time the teams scan, identify, and remediate these vulnerabilities, the attackers are already holding you hostage. When it comes to the types of positions, companies are still going to hire senior software engineers, or software engineers, or application security leads, or chief security architects, or AppSec architects. The titles are endless. If they're going to hire a software engineer, they want them to know at least some type of secure coding best practices in the programming language their expertise is in.
In terms of the landscape, we're getting a lot of web app attacks, and these attacks have shifted the first line of defense to our software developers. From a tooling perspective, you're also seeing: how do we train our software developers to do secure coding in real time? They're going to get a lab in the interview process and be asked, what's wrong with this code? The code can look completely fine, but contain a cross-site scripting flaw. They're going to have to be able to identify that type of vulnerability within the code, through the lab, and then also present on it. What's wrong with this code?
How do you remediate this code, or how would you defend against this code? Or, where in your code is it vulnerable? What did you miss? I'm seeing a lot of that in the interview process.
In terms of particular domains in security, I would say web application security is becoming very important. We're not seeing plain application security engineer anymore; I'm seeing web application security engineer. I'm seeing product security, particularly at the chief levels, like chief product security officer. If you had to distinguish between AppSec and product security, product security covers your applications, your infrastructure, and your operations, whereas application security is more focused on securing your apps. Other than that, because of this shift, if you're going to go for a software engineering role, chances are there's going to be a question, or some lab test or presentation, about: how do you secure this code? Can you review this code? Can you look at this model and tell us where the threat is? These are the types of things that I'm seeing taking place now.
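To illustrate the kind of snippet such an interview lab might use, here is a hedged Java example, invented for this transcript rather than taken from any lab Tehrani describes. The servlet looks harmless but reflects user input without encoding:

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Looks fine at a glance, but the "name" parameter flows straight into the
// HTML response: a classic reflected cross-site scripting (XSS) bug.
public class GreetingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        String name = req.getParameter("name");
        // BUG: unescaped user input; ?name=<script>...</script> executes
        out.println("<h1>Hello, " + name + "!</h1>");
        // Fix: HTML-encode before writing, e.g., with the OWASP Java Encoder:
        // out.println("<h1>Hello, " + Encode.forHtml(name) + "!</h1>");
    }
}
```

Spotting that the fix is a single output-encoding call, and explaining why, is exactly the kind of presentation these labs ask for.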
The Role of Developers in Security
Costlow: Why is so much of the conversation about freeing developers from the responsibility of security, when I don't really think you can free them from security, because they are responsible for it? Do you want to talk a little bit about the role of developers in security, especially since you're making a code analyzer that analyzes the code that developers write?
Gibler: There has been, I think, a significant shift in the security industry, or at least among a number of modern security teams. Traditionally, tools have focused on: how do we find all the vulnerabilities? How do we find all the security things that can be exploited? A lot of companies, Google in their recent SRE book, Facebook in blog posts, as well as Microsoft, Netflix, Slack, and Dropbox, are saying, "We tried to find all the bugs. It just fundamentally didn't work. What actually worked was building secure-by-default libraries and services and frameworks, things that, if developers just use them out of the box, are secure, at least against a number of classes of attacks." For example, web frameworks that are output encoding by default, so unless you escape out of the protections of the framework, you don't have cross-site scripting. That's just a class of thing you don't have to deal with anymore.
I do think that modern security teams are trying to free developers of the responsibility of security, because if you think about it, those skills are different. If you're a security professional, you probably spent years learning about the intricacies of cryptography, or output encoding and all the different contexts, or what XXE or SSRF is. I think many security teams are viewing their engineering counterparts as their customers. How can we provide a paved road such that, if you do things in a standard way, there are lots of security edge cases you just don't need to care about anymore? Which I think is fair, because developers are very good at producing new features that provide business value, code that's scalable, fault tolerant, and all these things that those of us in security are probably not as good at as developers are. How can we enable them to do their job faster, better, easier, in a self-service way? Of course, you can't make developers not have to care about security at all; there always needs to be a little bit. I think fundamentally, the goal here is: how can we enable developers to ship better, higher quality code, faster, and ideally not be blocked on the security team where possible?
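Gibler's secure-by-default point can be made concrete with a small, hedged sketch. This is my illustration, not code from any of the companies mentioned: a data-access helper that only accepts parameterized SQL, so injection is off the table for anyone on the paved road.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// A secure-by-default data-access helper: callers can only pass a fixed SQL
// template plus bind parameters, so string-concatenated SQL (the root cause
// of SQL injection) never reaches the database through this path.
public final class SafeQuery {
    private final Connection conn;

    public SafeQuery(Connection conn) {
        this.conn = conn;
    }

    public ResultSet run(String sqlTemplate, Object... params) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(sqlTemplate);
        for (int i = 0; i < params.length; i++) {
            stmt.setObject(i + 1, params[i]); // JDBC parameters are 1-indexed
        }
        return stmt.executeQuery();
    }
}

// Usage:
//   new SafeQuery(conn).run("SELECT * FROM users WHERE email = ?", email);
// The paved road makes the safe thing the easy thing.
```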
Voitova: Developers are not the only ones responsible for security, because security doesn't start with secure coding. It starts two or three steps before: with risk management, threat management, defining the architecture, defining what security controls we need to put into our system, and only then implementing these things. Developers often don't have this overview of the whole system, of the risks, threats, and regulations, so they don't have enough data to make those decisions. That's why we're happy when developers are at least good at secure coding, but there should still be some people to help with the rest.
Kapoor: For me, bridging the gap between the developers and security teams is where we can really add value, because you want the guardrails to be provided. Security teams know about the risks, vulnerabilities, compliance, and governance of the organization, which goes beyond application security for the particular app that's being developed. I think we need these larger, overarching policies that organizations can set for developers to self-serve from, depending on the context of the application. If it's a financial application, maybe policy A applies to it, versus an online retail application. I think the role is shared responsibility. There is a compliance and governance role that is never going to leave SecOps hands, because that's what they do. That's what they're responsible for. When it comes to app-level security, I do think developers have a role, accountability really, to make sure that the code is secure, but also that the organization is going to be compliant in the end.
Costlow: We're attempting to free developers from the burden of having to do all of the security and become experts, but you can't free them from the responsibility and accountability of producing secure applications.
Application Delivery and Security
Kapoor: I've always had difficulty looking at application delivery and application security as two different things. I almost look at delivery as inclusive of security, because in the end, a good delivery of an app means reliability, protection of data, trust with the organization, just all of these factors that in the end lead to revenue and also trust building in the marketplace. App delivery and app security are not mutually exclusive.
The Commonalities at Organizations That Excel at Security
Costlow: Who's responsible for writing and performing security tests, and for fixing what they find? In order to answer that, I want to combine it with another question, which is what I, in product management parlance, refer to as the "Anna Karenina" question. It follows that book's line that all happy families are alike, while all unhappy families are unique in the ways in which they are unhappy. As people produce secure applications or insecure applications, what are the things that everybody looks for, either technical controls or organizational controls, that pretty much everyone does alike?
Kapoor: Is your question, what security aspects are alike for all different organizations?
Costlow: When you look at organizations that excel at security and do things well, what are the things that all of them do the same?
Kapoor: I think it's what I call the three G's: guidance, governance, and guardrails. Those are the three things that are common in organizations that are doing this well. When I talk about guidance, it's the ability to provide a framework of sorts to the organization without being overly prescriptive on how and what should be done, and without being a bottleneck and a source of friction, because there's friction between the teams when somebody wants to ship the app but they're waiting for a response from the security team to give them the green light. So, guidance. Governance is a key factor: knowing the business the organization is dealing with. What are some of the sensitivities around the data that's traveling over the web? Just knowing the business of things helps you know what areas need to be focused on from a security standpoint.
Then the third is guardrails, which is flexibility. Giving options. Giving ideas that go beyond the standard OWASP Top 10, beyond, ok, just use this particular policy because it's going to cover your web app attacks, or this layer 7 denial of service attack, or whatnot. Really giving people choices and letting them pick what is going to be best for them, since they're so close to the application. I would say guidance, governance, and guardrails are key aspects of how you can actually automate security in a way that works for the organization and keeps it compliant.
Voitova: I would also add ownership here. In companies that excel at security, I often see that no matter what department, developers, security engineers, infrastructure engineers, managers, all people are interested in making good quality applications, which also means secure, reliable, and maintainable ones. That's ownership. They don't just create security bugs and close them as won't-fix forever. They care about the things they create. They understand the risks. Some things we can postpone. Some things we cannot do, and that's fine, we accept that risk. Everyone is moving in the same direction.
Kapoor: Which brings to mind that it's as much an organizational and cultural aspect as it is a technology and tooling aspect. I think what leadership an organization provides, and what ethos they carry around this, is going to be crucial. Even if you have the best tools, if people are not culturally aligned, it really doesn't work.
Gibler: I think there is some consistency in the types of security checks companies are doing. Overall, there's this concept of shift left: how can we check for security issues earlier? Oftentimes, this looks like some lightweight static analysis in IDEs, as well as slightly more in-depth static analysis as new pull requests or merge requests are created. There's container scanning, both for, is this a container we trust, as well as for outdated or vulnerable components. Maybe integration tests or some lightweight dynamic analysis. Throughout every part of the SDLC, there are one or two types of security analysis, and if you look at blog posts from a variety of companies, they are basically running the same types of security checks at the same points. I do think there's a lot of agreement there.
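As a hedged, toy illustration of the pull-request-stage check Gibler mentions, here is a deliberately naive Java sketch. This is not how Semgrep or any real analyzer works; production tools parse the code rather than grep it, and every name here is invented:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Stream;

// A deliberately naive pre-merge check: flag string-concatenated SQL in
// Java sources. It only shows where such a check sits in the pipeline.
public class LightweightScan {
    private static final Pattern SUSPECT =
            Pattern.compile("(executeQuery|executeUpdate)\\s*\\([^)]*\\+");

    public static void main(String[] args) throws IOException {
        Path root = Path.of(args.length > 0 ? args[0] : "src");
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(p -> p.toString().endsWith(".java"))
                 .forEach(LightweightScan::scan);
        }
    }

    private static void scan(Path file) {
        try {
            List<String> lines = Files.readAllLines(file);
            for (int i = 0; i < lines.size(); i++) {
                if (SUSPECT.matcher(lines.get(i)).find()) {
                    // In CI this could fail the build or comment on the PR
                    System.out.printf("%s:%d possible concatenated SQL%n",
                            file, i + 1);
                }
            }
        } catch (IOException e) {
            System.err.println("skipping " + file + ": " + e.getMessage());
        }
    }
}
```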
Something that was also mentioned was this idea of accountability. Rather than having security teams totally own negative security outcomes, the stance is more: as a security team, we are going to help you, the engineering teams, build things as securely as possible. Ultimately, you have the context and the understanding of business needs. We'll do everything we can, but ultimately, you own the risk and the security, and we'll help. There are, I think, some interesting incentive alignments where developers choose whether to follow a paved road or do their own thing, but they also own the security outcome of that choice. You can do whatever you want, but if your service gets hacked because of your choices, that's your fault, not the security team's. I think there's some nice incentive alignment there.
Kapoor: What you said about incentives is really key. The KPIs for security professionals and developers, and how they're measured, are quite different in a lot of organizations. It does raise the question of whether that needs to be relooked at, in terms of what developers are responsible for.
Tehrani: It's less about tools and processes; it's more about management style, in my opinion. Speaking with CISOs, some stand out to me. For example, it's how they run things: are you running meetings on a weekly basis for anyone in your company, from executives to [inaudible 00:26:41], to anyone that's interested in security? Can they come into these meetings and listen in on a code review? Are they going to become aware of some aspect of security? There are these cultural things that have a huge impact on a company's resiliency when it comes to security. One company that stood out to me was a Fortune 100 semiconductor company here in Toronto. They're one of the best code reviewers in the industry, because, they told me, every week on Thursdays from 2:00 to 3:00 p.m., company-wide, beyond just the phishing emails that the security team sends, they say, "We have a security meeting from 2:00 to 3:00 p.m., where we're going to talk about how to secure your home WiFi. Or, we're going to go through a code review and why code review is important." They're always introducing these topics, not in a pushy way; they're saying, if you want to walk in, you can go in and culturally learn about security and become more aware of it. This mindset shift allows your developers to think security first, and allows your network engineers to think security, without forcing security practices onto them. That's what I'm seeing from my angle.
Tying Together Code Analysis and Firewall
Costlow: We talked about the notion of shifting left a little bit, in the sense that one of the things organizations that excel at security do is bring checks away from production monitoring and closer to the development side. Rajiv and Clint, you two are on different ends of the spectrum, because code analysis is all the way over on the left, where code is written, but something like a firewall from NGINX tends to be on the operational side of the house. How can those two things feasibly be tied together to improve security overall, even though they sit at different points of the DevOps cycle?
Gibler: I think both are important. There have been a number of studies showing that the earlier in the development lifecycle you find vulnerabilities, the less costly they are. If you find one as the code is being written, you can fix it in five minutes. Once it's already merged in with the code, you need to create a fix, which then again goes through code review. If it's in production, it costs even more and takes longer. Certainly, remediating vulnerabilities earlier has a lot of nice cost and agility properties. Ultimately, though, you're not going to find and fix every bug before it hits production, so runtime protection is very valuable as well.
I do think there is also some opportunity for collaboration between the two. Let's say you are able to identify vulnerabilities in source code, or just earlier; maybe we got a report from a bug bounty or something like that, where we know that this is an issue. Then you combine that with visibility and observability: this is running in production, it's internet facing, so maybe this critical issue needs to be addressed faster, versus something that technically does have a vulnerability but is some internal app that our data analytics team uses. There's value in both; they are useful in different ways, and connecting the two is very valuable. Knowing where a vulnerability lives in production: is it in one service that's externally facing? Is it in a microservice that you have 1,000 instances of, so your attack surface there is huge? Is it connected to production data or customer PII? Both are valuable in different ways.
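As a hedged sketch of what connecting the two signals might look like, here is my own illustration in Java, not a product any panelist described: a triage score that weights a static-analysis finding by its runtime exposure. The weights and finding IDs are invented; the shape of the data flow is the point.

```java
// A back-of-the-envelope triage score combining what static analysis knows
// (severity) with what runtime observability knows (exposure). The weights
// are invented; the point is the data flow, not the numbers.
public class VulnTriage {
    enum Severity { LOW, MEDIUM, HIGH, CRITICAL }

    record Finding(String id, Severity severity, boolean internetFacing,
                   boolean touchesPii, int runningInstances) {}

    static double score(Finding f) {
        double base = switch (f.severity()) {
            case LOW -> 1; case MEDIUM -> 3; case HIGH -> 7; case CRITICAL -> 10;
        };
        if (f.internetFacing()) base *= 2;   // exposed attack surface
        if (f.touchesPii()) base *= 1.5;     // regulatory and data impact
        return base * Math.log10(1 + f.runningInstances()); // fleet size
    }

    public static void main(String[] args) {
        Finding internal = new Finding("SQLI-12", Severity.HIGH, false, false, 2);
        Finding edge = new Finding("SSRF-7", Severity.HIGH, true, true, 1000);
        System.out.printf("%s -> %.1f%n", internal.id(), score(internal));
        System.out.printf("%s -> %.1f%n", edge.id(), score(edge)); // fix first
    }
}
```

Two findings with identical static severity end up far apart once runtime context is folded in, which is exactly the prioritization argument above.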
Kapoor: There's flexibility if you shift security left. It's like a hotel: you enter a hotel room, there's a lock, and you have the key to the lock. There are many rooms in the hotel, and you have keys to each of the rooms to open the locks. It's like the microservices environment that you're describing, where you have the flexibility of deploying security closer to the app, closer to a particular environment. It gives the organization more flexibility to deploy things where they need to be deployed, and it also helps with tracking, tracing, telemetry, and reporting, so teams can slice and dice information based on what's relevant for them.
Voitova: Also, continuing this analogy with the hotel, step back again to the design phase. You can design your hotel to have a master key. Suddenly, we have a master key card that can open all the doors, and our threats, risks, and attack surface are different, because now we have to take care of insiders who might use this card.
Effective Ways of Training Developers in Secure Coding and Design
Costlow: From your experiences, what are some effective ways of training developers in secure coding and design? Because oftentimes, vulnerabilities are mentioned in training, but they're still observed afterwards. Andre, you've talked about the semiconductor company in Toronto. How have they dealt with training?
Tehrani: Yes. They have really good reverse engineers in the firmware space. Object-oriented programmers are their bread and butter, and I find object-oriented programmers do better secure code reviews, because that discipline is very tough. First, as a developer, agility and security don't go well together. In this world right now, in software development, it's all about rapid development and rapid deployment. The security team is always the stop sign. That just pisses you off. I get it. What you want to do as a developer, first, is embrace security, number one. You want to say, ok, I want to embrace security, because I want to create more secure code, cleaner code, because I don't want to keep going back and forth. You just want to be a better programmer or developer. You want to embrace security from that angle.
In terms of training, there are different paths you can take. You can take certifications. Some start at around $200, like in the CompTIA space, or you can go to SANS, where they are a couple of thousand dollars, and get secure software programming in Java, Python, .NET, whichever one floats your interest. Those certifications offer you a theory and a lab component. The other way, which many people don't know, is that you can volunteer inside your own company. If you really want to learn secure coding, and you have an application security engineering unit, or team, or just one solo person, and they keep getting your reports, you can sit beside them. You can go to that person and say, "I work 40, 50 hours per week. I'm willing to work a little overtime unpaid, just so I can learn this static analysis tool or this dynamic analysis tool, so I can learn how you secure code, or sit in on your whiteboarding sessions while you tell me where my code is insecure."
There are tools out there too. There's one tool that I'm familiar with called Codebashing. It's an eLearning platform that shows you right there on the spot where your code is vulnerable. You can go and get these eLearning platforms. If you really want to pivot into security, I would recommend a SANS certification, because you have that lab and theory attached to it, and it holds a lot of weight in the security industry because of that. You're going to learn that way. If you don't have the resources, or your company is not going to sponsor you to get these certifications, go and volunteer and sit with your security team: teach me secure coding, or show me where I'm vulnerable. You, in turn, teach them how to develop software. Communicate your rapid software development, your agile framework, your Scrum, to your security team so they also understand your shoes as well.
I think the best way is, first, you want to put your feet in the water and figure out: is this something you want to do? Is this something that you're interested in? Go and volunteer an extra five hours with your security team, particularly your AppSec unit or your QA, and figure out how to do code reviews and secure coding, and they will transfer that knowledge over to you. That's step number one. If it's something that you're really passionate about after that, then go after a certification and ask your company: if I get the certification, will you sponsor me? Or, if I get the certification, will you add me to the AppSec team? That's the best way, I think, that you can train yourself.
Because sometimes you have a really good security team, and you're not using that for your own career development, when you could just get that knowledge transfer. That's what's happening at the board level too, just so you know, because senior executives are now more responsible for becoming security aware. They're hiring the best CISO they possibly can. They're meeting with them, maybe two or five hours per month or more, and that CISO is just giving them the knowledge transfer. They didn't go to school to study it. Sometimes they even hire one CISO full time, but have three or four other CISOs to fact-check their CISO. Then, collectively, through that whole knowledge transfer, they become more security aware, or even experts in the field, depending on the type of knowledge that they gather.
Kapoor: We've heard of collaboration in terms of information worker tools, but we don't really talk about collaboration when it comes to enterprise DevOps tools. We don't really use the word collaboration there. I think that really needs to come front and center here, because we are clearly getting to the conclusion that this is a shared responsibility, and it requires visibility across the teams. Each developer needs to be able to ask: how is my app doing? I just released this application. Can I get telemetry back from it? Can I get the information so I can learn how I'm coding, and how I might be doing things that need to change? There is a continuous learning model with visibility, which is so key for developers to understand the impact and the consequences of doing or not doing something.
The Responsibility of Writing and Performing Security Tests in the Code
Tehrani: Who should be responsible for writing and performing security tests in the code? Who should be responsible for fixing those?
Because of this influx of web application attacks, the attention is being focused on the application security pillar for CISOs. In terms of who decides who should run these security tests, that's getting into the hands of the BISO, the Business Information Security Officer. As for who is fully accountable, it's a shared responsibility model, but the BISO decides and gives the information to the CISO. The CISO acts more as a validation point for the QA, software developer, and AppSec teams. Accountability-wise, it falls into the CISO's lap, because he or she is the one that has to report to the board and say, yes, I'm ensuring that our software and applications are secure.
In terms of the program, that's falling more into the hands of the BISO. Everyone is running software internally and externally; most companies are software companies today. Site reliability engineering is another department that's becoming heavily involved in this. It's all about figuring out, from a business point of view, how do we get our tests and our program running smoothly? The CISO is just acting as a validation point, because with insider threats, social engineering, and a lot of different attacks within this particular domain, they serve as that validation point for you. It's your BISO, and the different BISOs, that run these independent programs.
Tactical Ideas for Secure Coding Training
Gibler: Here are some tactical ideas for secure coding training. One thing Segment, Slack, and others have found is that a generic training platform, one that shows what SQL injection or cross-site scripting and so forth look like, can feel a bit abstract and not very exciting to developers. However, if you use specific code snippets from real bug bounty reports or pen tests of your own applications, you can show those to developers and say, this is what it looks like in our code, in our programming language and web framework. It feels more real and actionable.
For example, Leif Dreizler of Segment gave an excellent talk at AppSec California, where he said, we basically took a bunch of previous bug bounty submissions, found the code that was actually vulnerable, and created our own internal CTF that new employees went through. They spun up an intentionally vulnerable mini-version of Segment using code snippets from bug bounty submissions. They had a leaderboard where people who found all the different vulnerabilities were at the top, and I think the founders got involved. It was really a way to make new employees excited about and conscious of security in a way that was immediate, actionable, and practical, rather than a general high-level training that may or may not feel like it actually matters to you. It's like, yes, this matters because it happened to us; here are concrete examples. That's one way a number of companies have made it more exciting: both having a CTF, so it's hands-on breaking, and taking the code examples from your own code. It's a little more time intensive to do, but I think it is much more exciting and engaging.
Key Tactical Takeaways for Developers
Costlow: I like the notion of the tactical takeaways, especially in terms of just getting information that's relevant to the organization. What are some of the other key takeaways for developers who are looking to secure their applications across the entire DevOps lifecycle? What are the best tactical things that you can do very quickly?
Voitova: Think first, implement second.
Kapoor: There is no one-size-fits-all approach. The best approach includes multiple security layers. Ensuring each security solution effectively fits into your pipeline is key, so really think of each application, each situation, as its own thing, and really think about how a particular product, tool, or modality might be helpful.
Tehrani: Whatever tool your security team is asking you to deploy or integrate within your pipeline, whatever tool comes your way onto your desk, get into a war mindset. War train with these tools. It doesn't matter what tool comes your way; just be prepared, because whatever tool you have, you're going to war with it. That's really what the CISO and the security teams are looking to do in the next year or two: to get you ready, so that when these web application attacks come in, whatever tool you have, you're going to be able to go to war with it.
Gibler: Bobby said the same vulnerabilities are found after training. One tactical idea: step one, build a vulnerability management program where you are tracking, over time, the vulnerabilities you found. What vulnerability class was it? Where was it introduced in code? How severe was it? Basically, capture the important, relevant things about each one. Then, over a couple of quarters, you have this history of vulnerabilities you can review. Group them into buckets by vulnerability class and severity, so you can say, we had 10 XSS, 5 SQL injection, and 10 XXE. Then find the vulnerabilities that are most prevalent or highest impact, look at the code, and figure out why these happened in the first place. Is there a consistent reason? If so, build a library or other abstraction such that if someone uses it, the bug can't happen in the first place. Then create some internal coding guidelines like, "Everybody, when you're doing this, this is how to do it." Then add some lightweight checks to make sure that all new code does it that way. That is an effective way to eliminate classes of vulnerabilities rather than play bug whack-a-mole. That's the idea.
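As a hedged sketch of that first bucketing step, here is a small Java illustration with made-up findings, not data from any company mentioned: group the vulnerability history by class and sort by count, to see where a secure-by-default abstraction would pay off first.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// A minimal sketch of the "review your vulnerability history" step:
// bucket findings by class, then sort buckets by count to see which
// vulnerability class deserves a secure-by-default abstraction first.
public class VulnHistory {
    record Vuln(String vulnClass, String severity, String whereIntroduced) {}

    public static void main(String[] args) {
        // Invented findings standing in for two quarters of real reports
        List<Vuln> lastTwoQuarters = List.of(
            new Vuln("XSS", "high", "templates"),
            new Vuln("XSS", "medium", "templates"),
            new Vuln("SQLi", "critical", "reporting service"),
            new Vuln("XXE", "high", "XML import"),
            new Vuln("XSS", "high", "admin UI"));

        Map<String, Long> byClass = lastTwoQuarters.stream()
            .collect(Collectors.groupingBy(Vuln::vulnClass, Collectors.counting()));

        byClass.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .forEach(e -> System.out.println(e.getKey() + ": " + e.getValue()));
        // Here XSS leads, so invest in an auto-escaping template layer
        // plus a lint rule against the raw, unescaped API.
    }
}
```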
Costlow: I hate bug whack-a-mole. It's just a big waste of time for everybody.