Transcript
Bango: The reason I did this talk is I really wanted to do it in a developer-centric environment because I wanted to speak with folks who I've had this really strong relationship and bond with for almost 30 years. It's become such an important topic. I think, in many cases, in the security field, we don't really look at it. We take it for granted that a lot of the things that we do are just going to be secure. As developers, we have a lot of pressures, and those pressures are coming from management, they're coming from product teams to get the product out. Sometimes, things fall by the wayside. I chose to start trying to address that, in some fashion. This is how I did it, and some of the challenges that I went through, some of the hurdles that I had to face as somebody who didn't have formal security experience.
Also, I hope this gives you a foundation for maybe getting into security yourself in some fashion, whether it's just part of your development lifecycle, or whether it's something that you want to switch to as a full-on career. There are a lot of opportunities there.
My name is Rey Bango, and that's my Twitter handle - fairly unique, you can tell. I work for Microsoft. When I joined Microsoft, I came from the web developer space. I went in there with all the bright-eyed and bushy-tailed expectations of cross-browser development and making sure developers were all happy and "Hey, Kumbaya," and all that stuff. I ended up supporting the best browser in the whole entire world - Internet Explorer, it's awesome. Literally, I was the Internet Explorer evangelist. I don't even want to call it advocate anymore. I was the evangelist because I had to get IE back into some semblance of love, and it was hard. This was really hard, because I know all of you loved coding for Internet Explorer.
It was a slog, it was really challenging. I had to go and speak with web developers who were very much into open-source, cross-browser development. Some of them were very challenged by the things that Internet Explorer had, some of the quirks. The criticism was fair. The developer tooling was challenging, and the rendering engine, Trident, God bless it, was good at the time. Then it just wasn't modern. I was trying to find a video of me at one of the events, and I really did find one. This was actually me walking out of one of the web developer events. They set me on fire, it was a rough position. Think about the things that I had to endure. Think about the challenges that I had to face, not only on the web developer side, but also on the infrastructure side, working with enterprise customers, thinking about all the different things that you as professionals have to endure, including security.
Of course, I had to deal with the running jokes. Chrome and Firefox were fantastic browsers; they were modern, they were fast. The rendering engines were amazing. That's why developers really started moving down that path, and I get it. People want the new shiny, but they also want modern and they want fast. They want features, and these browsers were offering those features. I think we lagged quite a bit behind.
Then we said, "Let's try to figure this out," and we switched to Microsoft Edge. It really was way better, but there were still some quirks. Now, thankfully, the team - and I'm no longer on that team - has embraced the Chromium engine. I'm very proud of that team for embracing the Chromium engine. There are a lot of iterations going on. I have the Chromium version of Edge here and I'm really proud of it. I think it's a great browser, it's been solid, it's been consistent. I'm proud of that.
Security
After doing that for so long - maybe eight years - I really started wondering, what's the next step for me? What are the things that I want to talk with developers about? It's great to talk about cross-browser development, but after a while that becomes repetitive. I started looking for a new narrative, something that I felt was important, and something that developers would hopefully relate to. That was security. I really felt that there was a need to start talking about how our applications were built from a security perspective.
There were a lot of things going on in the news about vulnerabilities in web apps, whether it's PHP-based, whether it's client-side apps, whatever it might be. There were so many news articles coming out that I felt that we weren't doing our job to properly secure what they call the new endpoints. Web apps, for all intents and purposes, are the new endpoints. This is what attackers are looking at. During my research and transition, it became clear that cloud infrastructures are getting much more robust and secure. If you implement something out in the cloud and you do it the right way, generally, it's going to be solid; fewer moving parts to deal with, and you let the infrastructure provider handle a lot of security. You do have to do some things on your end, but for the most part, once you get it down, it's pretty solid.
What does that mean? Attackers have to look for new endpoints, whether it's your APIs, whether it's the fact that you didn't sanitize input properly, or maybe it's cross-site scripting, whatever it might be, but they're looking at web applications. I remember the first time I saw a web shell appear within an application. I freaked, "How do they do this?" If you're not familiar with a web shell, think of it like when you open terminal and you can type in commands. Think about doing that within an application - your web application. That freaked me out, I didn't know that was even possible.
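To make the danger concrete, here's a minimal, hypothetical sketch in Python (not from the talk) of how unsanitized input can turn into command execution, which is the same class of problem a web shell exploits. The `run_ping_*` names are invented for illustration, the `shell=True` behavior assumes a POSIX shell, and `echo` stands in for a real command so the sketch is harmless to run:

```python
import subprocess

def run_ping_vulnerable(host: str) -> str:
    # DANGEROUS: interpolating user input into a shell string means shell
    # metacharacters like ";" let an attacker append their own command.
    # (Using "echo" here so the sketch is harmless to run.)
    result = subprocess.run(f"echo pinging {host}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

def run_ping_safer(host: str) -> str:
    # Passing an argument list with shell=False means the input is treated
    # as one literal argument, never interpreted by a shell.
    result = subprocess.run(["echo", "pinging", host],
                            capture_output=True, text=True)
    return result.stdout
```

With an input like `"myhost; echo pwned"`, the first version runs the attacker's extra command, while the second just prints the string verbatim.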
That's where attackers are now focusing. That's how these criminals are trying to break into your systems. A lot of times, they're only compromising it, maybe to install crypto mining. Sometimes they're doing it for different reasons. You might get somebody who's a nation-state, if you work for a financial institution or government agency. Nation-states are really interested in all the facets of your web applications. They want to know, because that data is really valuable. Has anybody suffered identity theft? I have, it is horrible.
This is why it's so important to think about the security of your applications, because the impact is greater than you. The impact could be human life. Somebody breaks into your app and manages to steal PII. That's a big deal. That could mean that somebody's life now is ruined because that data is out on dark web markets and it's being used to get everything from loans to just stealing bank accounts. So it's really important and imperative that we start thinking about security.
When I decided to transition, I decided to go to the conference that had the most black hat hackers in the whole world. That's how I wanted to learn about security; throw myself into the fire. Isn't that great? I was really scared to go to this because everything you hear about DEF CON, is, "You’ve got these criminals, and they're there, and they're going to hack you if you open your phone - oh, my God." No, seriously, they recommend you take a burner phone and take a burner laptop, and then take the laptop when you're done, throw it on the floor, set it on fire. I'm, "Oh, God."
I did manage to go and it was fun. I tweeted this out as I was getting myself ready, and I really just thought about security from a "Maybe I can do something from a hobby" perspective; get familiar with it, and then try to incorporate it into my work, especially on the application security side. I'd never thought about security from a professional perspective.
Then WannaCry hit. It's ransomware that basically encrypts all the files on your PC. Normally, if you personally get hit with ransomware, it's, "All right, big deal." Your computer's locked; that's maybe bad, because maybe you have personal pictures. At most, it's a really horrible inconvenience. If you're dealing with a financial institution, your money is safe, it's insured. If Chase gets locked, all right, they're insured. You're not going to lose your money.
What about human life? This is what impacted me. This is what really was the catalyst for me to start thinking about this as a professional. WannaCry locked the computers of hospitals, and patients were being turned away for dialysis treatment, and chemotherapy, life-saving treatments that they needed. From a personal perspective, this hit me hard because I thought, what if it's my family? What if it's one of my children? What would I do? Think about that helplessness that you would feel if this was locked and the doctor says, "Look, I can't treat your child today," and that child might need life-saving treatment, and you had to walk away. How would you feel? That's how I felt.
AppSec is Hard
I really started digging into the security space. The one thing I realized is that, "Yes, application security is really hard." It is really hard because we have a lot of moving parts. We have a lot of libraries that we use, we have a lot of frameworks, we have a lot of infrastructure. We have to integrate all this into a cohesive thing, and then there's the human factor. We're all writing code, and we're all fallible. I don't know if any developer, except for myself, writes 100% pure, awesome code. Ok, bad joke.
Seriously, we don't write perfect code. Invariably, we're going to make mistakes, and that's the hard part about it. They say in the cybersecurity world that defenders always have to get it right 100% of the time. Criminals and hackers only need one time, that's all they need. That's hard, think about that. How do we do that, especially when we're trying to work with a security organization that basically thinks this is what we do? This is us, the developers. We just lounge around, getting served stuff, we're party animals. That's what the security community thinks we are.
It's not like that. They don't understand that we have all these bits of tooling that we need to incorporate into our application to make it successful. Whether it's the framework we reuse or the CI that we have to implement, all these moving parts are critical to the success of modern development workflows. They just don't get it. Something as simple as saying, "I need to install this," gets you the look from the security people. When you go in there and say, "Listen, I need admin access on this box, I need to install all these things," what happens?
Tell me you haven't been zapped just like this. It's the God's honest truth that security people have this fear of application developers, because we need so many permissions to do things that security people don't want to grant. We need to have that access; they want to limit the access to limit their exposure. So who's right and who's wrong? Neither. Both are doing a job. How do you meet in the middle to actually be successful across the board?
I love this right here, though, because to some extent, I understand what security people are talking about. Developers can be lazy, I'm guilty of that. The number of times security researchers have found passwords or API keys in GitHub is incredible. There are people who literally will go out to GitHub and do a complete scan of GitHub, on repos, to find API keys. If you didn't know that, now you know. You should log into your GitHub repo and go back, and start looking through your code, and say, "Did I leave a password or an API key in there that I shouldn't have?" Seriously, it's really rough.
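The kind of scan those researchers run can be approximated in a few lines. Here's a hypothetical, deliberately naive sketch in Python - real scanners such as gitleaks or truffleHog use far more patterns plus entropy checks - where the patterns and the `find_secrets` name are invented for illustration:

```python
import re

# Naive patterns, for illustration only. The first matches the shape of an
# AWS access key ID; the second matches obvious key/password assignments.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)(api[_-]?key|password)['\"]?\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def find_secrets(text: str):
    """Return (line_number, line) pairs that look like leaked credentials."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for pat in SECRET_PATTERNS:
            if pat.search(line):
                hits.append((lineno, line.strip()))
                break  # one hit per line is enough to flag it
    return hits
```

Running something like this over your own repositories before pushing is a cheap habit; attackers run the equivalent over all of GitHub continuously.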
This is why security people get freaked out by developers. We sometimes do these things. On top of that, you have things like the Npm attacks that have been happening recently. These are called supply chain attacks, basically. This is actually really clever, but it's been predicted for years. The thing that's happening on a regular basis is that somebody is coming out and creating a really well-liked package, something that people will use on a regular basis.
How many of you have just done npm install package-name without verifying dependencies? I've done it. All of us have put ourselves at risk because we did that, we didn't verify dependencies. We're making an assumption that all that is great. I think that's something that in the open-source world is very common. This is why this happens. Because we are to some extent so trusting, it allows us to look at a package name and say, "That looks pretty good. I'm going to include it in my stuff." But little do we know that somebody could go in there and say, "I'm version 3, I'm going to add that little crypto-mining script," or, "I'm going to add this script that's going to steal all the stuff from your Coinbase account."
That's exactly what happened here; somebody did a supply chain attack. This was a dependency on some other packages. Thankfully, the Npm team caught this. They're actually doing a lot of security auditing. Adam Baldwin, who ran the Node Security project, his company was acquired by Npm, and now they have a very dedicated team of talented professionals. They're auditing codebases, especially on regular and popular dependencies.
Then this is what I was saying, basically - that this has been known for a while. The problem is that not only has it been known for a while, there's been a lot of pushback around this. When people push back and say, "We have to be careful about this," it's not that people are trying to stop progress. They're just asking us to think and be thoughtful about the way that we're building things so that we don't get burned down the road. I have seen this type of stuff, not in the security space, but in other aspects of web development, where if you push back on anything because you just feel like, "Maybe we should take a couple of minutes to really think it through," there's always that comment about, "No. Progress. We need to move forward. Why are you being that guy who's blocking us?" It shouldn't be that way. We should be a little bit more thoughtful in the way that we approach things.
I did a search on Npm dependency health. I got this slide, which is great. Look at this, this is real. I'm going to show you why it's real. I said, "I wonder if this is an actual real repo or somebody just faked it." I actually went to the GitHub repo and I went to the dependency graph, and I started looking at this for this project. I'm like, "Oh, ok," and keep going, and keep going. Oh, wait, there are 63 more. The key thing to remember is that each one of these is a dependency for this project. Each one of these has an opportunity to be an attack vector. I know Substack, reputable person, but let's say Substack adds a collaborator, and that collaborator gets trusted privileges to push code and inject something in there that nobody else in the project sees, because a lot of these are one-person projects. What happens then? They do an update on their app. That dependency automatically gets injected. Now all their customers are at risk.
I don't want to pick on them, because this is not just them. Here's a project I worked on called PWABuilder. Let's look at this. Boom, there's Mocha, there's Chai. Oh, wait, there are 39 more. This is a project that I worked on, I'm not picking on anybody. This is an example of a very common type of dependency graph on most modern projects. Even something as simple as a simple project I worked on would have this.
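One way to get a feel for your own transitive dependency surface is to walk an npm lockfile. This is a hypothetical sketch assuming the v1 `package-lock.json` shape, where each entry may nest its own `dependencies`; `audit_lockfile` is an invented name, and flagging entries without an `integrity` hash is just one illustrative heuristic, not a real audit:

```python
def audit_lockfile(lock: dict):
    """Count transitive dependencies in a v1-style package-lock structure
    and flag entries that carry no integrity hash."""
    flagged, total = [], 0

    def walk(deps: dict):
        nonlocal total
        for name, meta in deps.items():
            total += 1
            if "integrity" not in meta:
                flagged.append(name)
            # v1 lockfiles nest transitive deps under each entry
            walk(meta.get("dependencies", {}))

    walk(lock.get("dependencies", {}))
    return total, flagged
```

Even on a small project, the total usually comes out far higher than the handful of names in your package.json, which is exactly the attack surface the dependency graph slides show.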
One of the things that I love about GitHub is that they've become a little bit more proactive in helping you identify security vulnerabilities. If you didn't know this, you should really sign up for their security notifications for your projects. This is a silly project that I was working on called Pinteresting, when I was trying to learn Ruby on Rails, and I haven't updated it in years. The great thing is, I still get security alerts for it. Nobody's using it, so I'm not worried about it, but it was a great example of how GitHub is being proactive in giving you alerts about security vulnerabilities.
You go through here, and you can see what needs to be updated. For example, Actionpack has a high severity bug. If I drill into it, it'll tell you what the remediation is, which is great. That's number one. I think anything you use should help you remediate the issues. Then, it tells you about what the CVEs are, the actual vulnerability reports. It tells you what the problem is. I think the last thing you want is for something to be able to run a remote code execution - kind of a bad thing. So it's important that all of us start thinking about how these services can help to complement your workflow. Listen, security is tough, and I'm not expecting everybody in this room to all of a sudden become a security expert. Try to supplement the work that you do by leveraging the work of others who are security experts. We'll talk a little bit more about some services that can help you with that as well.
Stats
I spoke to a company called Snyk, and they gave me some really good stats. This was actually really eye-opening. They asked a bunch of people, especially open-source maintainers, if they audit their code and the cadence of it. 26% of them said they don't. Think about this - 26% of open-source maintainers said they don't audit their code. This is the code that you are likely actually looking at. 21% said at least once a month, which is good, but 10% of them said every couple of years. Every couple of years means that, in the interim, while you're getting these releases, you're just installing whatever is there. You don't know what's being installed.
How many of you have looked at every dependency and audited every piece of code from those dependencies? That's what I thought. We don't, because we have too many things on our plate. We have time constraints, we have features, we can't do this. If you're not doing it, think about the open-source maintainer, that one person out there trying to do this. How are they going to go ahead and audit thousands and thousands of lines of code? You hope that maybe they're doing it from the very beginning, but many don't.
The problem is that we have implicit trust. We assume that everything that we install is safe. We love open-source, "Clearly that developer should know what he's doing and making things safe." No. A lot of these developers are really good at building JavaScript and Ruby on Rails, and a lot of them are really poor on security, so we have to think about how that plays into it. If you look here: how do you find out about vulnerabilities? This is more, I think, a generalized developer question. 27% said, "I probably won't find vulnerabilities." That's peachy, that's really reassuring. 36% said, "We use a dependency management scanning tool that notifies us." At least there are some people who are thinking about that, and I think that number is expanding. I think there's becoming more awareness around it. A lot of these vendors are actually going out there and being proactive about educating the community about the things that we need to think about.
Dark Reading had a great article about web apps. It said that, on average, each web application that this company tested contained 33 vulnerabilities and 67% of the apps contained critical vulnerabilities such as insufficient authorization errors, arbitrary file upload, path traversal, and SQL injection. The fact that we're still dealing with SQL injections at this time floors me. Does anybody know how to solve SQL injections? What's the easiest way to solve SQL injections?
Participant 1: Sanitize inputs.
Bango: That means that somewhere down the line, somebody's not sanitizing an input. Why is that still a thing? Even the frameworks offer that. Here's another one from VeraCode; they polled 400 developers. They found that just 52% update their components when a new vulnerability is announced. Think about that - 48% don't update their components when a new vulnerability comes out. That's scary. They know it, but they're not doing it. We do have a problem.
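The fix the audience member named - treating input as data via parameterized queries rather than string concatenation - looks like this minimal sketch. It uses Python's built-in sqlite3 purely for illustration (the function names and table are invented); the same placeholder idea applies to any database driver or framework:

```python
import sqlite3

# In-memory database with a toy users table for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_vulnerable(name: str):
    # DANGEROUS: string interpolation lets input rewrite the query.
    # name = "x' OR '1'='1" turns the WHERE clause into "always true".
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Placeholder binding: the driver treats the input strictly as a value,
    # so the injection string just matches no rows.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

The classic payload `x' OR '1'='1` dumps every row from the vulnerable version and returns nothing from the safe one.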
Web Apps & APIs are the New Attack Endpoints
Web apps and APIs are the new attack endpoints. This is a real thing. I've been speaking to a lot of security people over the last two years that I've worked to transition over. Kevin Johnson wrote the web application pen-testing course for the SANS Institute. I was having a conversation with him, and he said, "The majority of penetration testing engagements I get right now are for application security." It's because cloud infrastructures are pretty solid. As for networks themselves, people are getting really good at locking those down. Endpoint detection systems are getting way better. With AI and machine learning, things are getting way better.
It's not always going to be fail-safe - I'm not saying that - but now attackers are shifting to a different location. They're saying, "How do we go ahead and compromise these systems through web apps?" because it's the human factor; humans are fallible. They're going to make mistakes, that's how that happens, that's how they target it. That's why I would suggest that everybody in this room take a moment and just go look at the OWASP Top 10. It's the list that OWASP has determined of the top 10 common web application vulnerabilities. I'm going to call it application vulnerabilities, I don't even want to say web app.
Do you know what the number one vulnerability is on this latest release? SQL injection. Cross-site scripting is still on there, cross-site request forgery is still on there - all these things we can mitigate as developers. I think a part of that is that we were not really sure how to do it, in many cases. Some of the things are hard. I would urge you to take a moment and go to OWASP, and use that as a reference to get better at securing your applications.
The other thing that's becoming really important right now is, start thinking differently about the way that you build your applications. In traditional methods, we gather requirements, we do the design, we code, we test, and we release. Then, maybe we'll think about security around the testing phase, "Yes, we'll test it out. We'll get somebody to manually do input testing stuff like that." That's great. That really doesn't do much. You might catch a couple of things, but what you want to do is shift left.
Tanya Janca is my teammate, she's awesome. She's one that you should follow as well. She's really strong on application security. She's into DevOps. I would say follow her, and I'll give you her information later on. This was her slide, and I took it from her, I pilfered it - I social engineered it. Basically, we should be thinking about security from the requirements stage. We should be shifting left, and thinking about security from the beginning, and moving forward. As a new feature comes out, we should be thinking about how security comes in. The problem is that we have a disconnect between the way that we as developers work, and how security professionals think. We don't really talk the same language in many cases, and that's hard. The communication has always been a challenge between these two, and I call them complementary fields.
Security Champions
The biggest thing that's happening right now is the notion of security champions. This is where it could be a good opportunity for you to start shifting into security, for you to go to your organization and say, "Listen, I'm really interested in security. I want to make our app secure. I'd like to be a security champion and be able to work with the security folks, and have that good dialogue, and that feedback chain going across the board." You would be the person who's responsible for having those communications at the requirements stage, and ongoing conversations about how security can be best implemented into your applications.
How do you enable developers to have a really strong workflow, while still letting it be secure? This is where a security champion comes in, because you understand that side. On the security side, you also have a security champion; that person who's going to be that advocate, not only for the security side, but also for the developers, because that person is going to take the time to understand your needs as developers. This is a really cool way of thinking. I urge you to go to your management and spend time talking about that. How do you identify two people? Just start with two people that can be those advocates, that can say, "We want to build secure systems and we want to come up with a good method for keeping everybody happy."
Jerry Bell said it great, "I find that developers often lack the perspective on the adversarial mindset." This is really important, because we don't have that adversarial mindset. Our job is to build features and to build really compelling apps. We need to have this, we need to start thinking about what would a malicious actor do with this feature? Start drilling into it, whether you're using something like Burp Suite or Fiddler to tinker around, and add data, and see what's happening, or interpreting how the requests are coming across the wire to see can they be pilfered in some way?
Tools for Learning about Security
We need to start thinking a little bit more like that adversary so that we can identify the vulnerabilities. That's why I wanted to go into the security space. I actually enjoy that little adversarial side. My hacker handle is a little bit evil - I have that little evil bit in me. It is kind of fun to break things. It's also important to fix things. I thought this was going to be me, I really was going to put on the mask. I started looking for security training, so I could get up to speed. What do I do? I go to YouTube, of course. What happens on YouTube? You just get flooded with stuff. Where do you find stuff? It's impossible. There's just so much, there's a glut of information.
I said, "All right, I'm going to start off with this one, The Complete Ethical Hacking Course." That was ok, but I think it depends on the type of person that you are. I'm the type of person who actually enjoys going into a classroom, sitting down, being able to talk with somebody, getting that mentorship aspect to it. Some people enjoy online training. That's actually great; I took an online training class that turned out really well for me too. But I do enjoy being able to sit down and talk with people, and also have that mentor I can bounce questions off of.
I ended up going here to Hacker House. That wasn't online, that was an in-person class that gave me the foundations of how bad actors actually do their things. It's incredible the amount of tooling out there. It makes it trivial for anybody to pick up and have some hacking tools, and actually go at stuff. It doesn't mean they're doing great. It doesn't mean they're going to be quiet, they’re probably going to be very noisy. But the fact that these tools are so readily available, and they can make your life impossible, it's scary. I got that foundation because that gave me that adversarial mindset. It showed me, "This is what I can do." I can use this tool to go ahead and probe databases, a tool like Sqlmap, a great tool. It looks into MySQL databases, and will actually return your databases, your data tables, users, data inside of it. That's just one tool. The point is, it gave me that mindset.
Then I carried on with my education at eLearnSecurity, which gave me a little more advanced training. This was online. I just came back from a course with the SANS Institute. This fried my brain, because buffer overflows are hard as it is - you're dealing with assembly code, and machine code, and memory addresses. By the third or fourth day, I was done, I was fried, I wanted to go home. I curled up into a ball in the room and I was, "No. Please." It was horrible, but it was a great course and gave me perspective on what's possible in terms of bypassing operating system defenses.
I'm not saying that you have to do all that, but the very least, taking an ethical hacking course and getting familiar with the tools that give you application security coverage will be really beneficial. For example, there's a course on YouTube, it's a free one, on how to use Burp Suite. It's done by Sunny Wear, she is an expert on application testing. I would tell you, look it up and follow her course because she actually walks you through how to use Burp Suite to do penetration testing on your app. It's really powerful. That's the type of stuff that all of us can do. You all can download the community version of Burp Suite and poke around in your app to see if you find stuff. It's really helpful.
I also have a ton of books. The one that I would recommend is called, "The Web Application Hacker's Handbook." This is widely considered the Bible of penetration testing for web applications. Some of the concepts may be a little dated, but a lot of them are not. I would tell you, "Go buy that book." If there's one book that you can buy, as web developers, get that one, and walk through it, and you're going to be freaked out. The other one you can get is "The Tangled Web." That's another really good one, but this is the one that everybody recommends.
It's more than just reading books, it's more than taking a class. You have to practice, you have to go out there and actually try this stuff out. I would say look at OWASP Juice Shop. This is great because it's a Node.js Application, uses Express, uses Angular, and it's purposely vulnerable. It allows you to download as a Docker image. You can install it on a VM, and you can hit it. You can actually tinker with it to get familiar with things like cross-site scripting, cross-site request forgeries, SQL injection. This is the type of stuff that allows you to practice in a safe environment, where you won't get in trouble.
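The cross-site scripting side of what Juice Shop teaches comes down to output encoding. Here's a minimal illustration in Python, with invented `render_comment_*` names; any real template engine does this escaping for you by default, which is why disabled or bypassed auto-escaping is usually where the bug lives:

```python
import html

def render_comment_vulnerable(comment: str) -> str:
    # DANGEROUS: untrusted input interpolated directly into markup, so a
    # comment containing <script> executes in every visitor's browser.
    return f"<p>{comment}</p>"

def render_comment_safe(comment: str) -> str:
    # Escaping turns markup characters into inert entities like &lt;script&gt;.
    return f"<p>{html.escape(comment)}</p>"
```

Try the classic payload `<script>alert(1)</script>` against both versions in Juice Shop-style practice: the first reflects live script, the second renders harmless text.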
This is the one thing I'm going to say. This is my disclaimer to all of you. If you decide that you want to go down the hacking route, please do it in a safe environment. Do it in your own virtual lab. Go to something like Hack The Box, or hack.me. Do not hack production systems, please. You will go to jail. Juice Shop is one of those opportunities for you to be able to hack something in a safe environment, where you can load it into a virtual machine and go at it, and actually get to learn how some of these attacks happened, and how to mitigate them.
The other one is Damn Vulnerable Web Application. This has been around for a long time, it's another great one. What I love about it is that it gives you increasing levels of difficulty. You start off with the really foundational stuff, then you flip a bit and it keeps getting harder and harder. As you continue on, you can learn different ways of hacking the site. Then, of course, Tanya Janca, my teammate, has something called the OWASP DevSlop project, which is in a similar vein, but more for people who are focused on DevOps. Definitely something to consider if you're getting into DevOps, to understand more how to protect systems.
Automate Security
Ultimately, we have a lot of work on our plate. Security may not be what we're going to do. Maybe it's going to be something that you're going to get a little bit familiar with, but not your forte. Totally fine, that's acceptable; this is where automation comes in. It's critically important to start thinking about how you can include services that help you solve some complex problems, especially around security. If you're not a security expert, maybe you should be looking at leveraging other people who are. This is where companies like WhiteSource, Snyk, VeraCode, and Black Duck come in. I was talking to the WhiteSource folks and they gave me some really great data. Let's read through this. Over 75% of organizations were aware of only 50% of their open-source inventory. That's scary, that's inventory management. If you're a company that has to focus on compliance issues, especially around IP, that's really scary.
I know at Microsoft we have very strict policies about open-source because we have to be careful about lawsuits. If somebody creates an open-source project that has leveraged code from another project, and that code isn't properly licensed, they can come back to us. As an organization, if you just embrace an open-source project and you're not aware of those licensing concerns, that's a big issue. 90% have at least one vulnerability; over 45% have five or more. There's at least one license that doesn't meet company policy, on average. How many of you in this room would be able to tell if the open-source project that you're using would meet licensing requirements across the board, taking into consideration all the dependencies that these open-source projects have? Four people. Let's say there's 100 people in this room, so four people out of a hundred can feel confident that they have awareness of the licenses for the open-source projects they use. Think about the issues that could create for your project down the road. When you embrace a vendor, yes, you're going to pay out of pocket for it. This is the type of stuff that you want to pay for. This is the type of stuff that's important, because it protects you not only from a security perspective, but from a compliance perspective.
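To make the inventory problem concrete, here is a minimal sketch of the first step these tools automate: enumerating what you actually have installed and which licenses you can even see. This only covers one Python environment (commercial tools like WhiteSource or Snyk walk entire dependency trees across ecosystems), and the function names `license_inventory` and `unknown_licenses` are my own, not any vendor's API.

```python
from importlib import metadata

def license_inventory():
    """Collect (name, version, license) for every installed distribution.

    A rough, single-environment stand-in for the inventory step that
    software composition analysis tools automate at much larger scale.
    """
    rows = []
    for dist in metadata.distributions():
        meta = dist.metadata
        name = meta.get("Name") or "unknown"
        lic = meta.get("License") or "UNKNOWN"
        rows.append((name, dist.version, lic))
    return sorted(rows)

def unknown_licenses(rows):
    """Flag packages whose license metadata is missing or unspecified.

    These are exactly the entries a compliance review would have to
    chase down by hand.
    """
    return [r for r in rows if r[2] == "UNKNOWN"]
```

Even this toy version usually turns up packages with no declared license, which is the "awareness" gap the statistic above is describing.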
WhiteSource is great, I love their dashboard. I love the fact that you can drill into their dashboard and it gives you a really good view of the vulnerabilities that your dependencies have. This is stuff that's really challenging. Remember what I said: if you don't know security, work with somebody who does. All the vendors will give you some kind of dashboard. You want to have that bird's eye view that says, "This is a vulnerability. I need to dive in there and solve this problem." You want that, and doing it by yourself can sometimes be challenging.
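The per-dependency vulnerability lookup behind those dashboards can be sketched against a free, public source too. As an assumption on my part (the talk doesn't mention it), this example uses the OSV.dev query API, which accepts a package name, ecosystem, and version and returns known vulnerabilities:

```python
import json
import urllib.request

# Public OSV vulnerability database endpoint (osv.dev).
OSV_URL = "https://api.osv.dev/v1/query"

def build_query(name, version, ecosystem="npm"):
    """Build the JSON body for an OSV query about one dependency."""
    return {"package": {"name": name, "ecosystem": ecosystem},
            "version": version}

def check_package(name, version, ecosystem="npm"):
    """Return the list of known vulnerabilities for one dependency.

    This is one data point on a dashboard; real tools run a query
    like this for every node in your dependency tree.
    """
    body = json.dumps(build_query(name, version, ecosystem)).encode()
    req = urllib.request.Request(
        OSV_URL, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])
```

For example, `check_package("lodash", "4.17.20")` queries the npm ecosystem for that version's known issues. The vendor products add the parts that are genuinely hard to build yourself: the bird's-eye dashboard, policy enforcement, and triage workflow on top of raw results like these.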
I would urge you to look at any of these vendors. I don't work for any of them, obviously. That's why I put several up here. I think it's important for you all to look at the choices, and there's more than that. There are a lot of different vendors; these are just the four that I knew off the top of my head, and I didn't want to have a very cluttered slide. Look at them all, and see which one solves your problem, and which one is the one that you feel really strong with. All of them are good vendors. It's up to you to evaluate which one fits in for you.
Build a Strong Network
The other thing I'm going to urge you is to start building a strong network. What I mean by that is build a strong network of people who understand security. Get to know people, fellow developers, security professionals, who understand the landscape and understand the threats that are coming out. There are people who are proactively monitoring this constantly. You don't have to be monitoring it yourself. I don't think we should, but I think you should make friends with people who are, and ask them to help you stay on top of that.
If you go to your security team and say, "I really want to know when a security vulnerability comes out, because I want to stay safe with our application. I want to build secure software," I can guarantee you, they're going to be so excited. They're probably going to hug you. You're going to have this weird, awkward hug thing. Tanya Janca, my teammate, she is amazing. I would say follow her, get to know her, say hi to her, she's really friendly. She is one of the best application security people I know. Of course, you have me. Hopefully, you guys will reach out to me and ask me questions. I would definitely urge you to do that.
The other part is, look at non-traditional talent to help you out in solving some of these problems. We always look for that person who has the 5 or 10 years of experience, whether in application development, or security, or whatever it might be. What about that person who maybe is a really good project manager who wants to be that intermediary between the two teams? Why can't you leverage that? Does it have to be an application person and a security person talking to each other, or could it be somebody who has this desire to go out there and be that bridge? Look for people who are outside of your normal domain and capitalize on their strengths, whether it's communication, project management, leadership, whatever it might be, to be those advocates for you. Most importantly, do the right thing. I love the saying, "Go hack the planet," just please don't hack me.