Transcript
Celine Pypaert: We're here to talk about how we can improve security when it comes to open-source dependencies, and how that ties in with risk management. Everyone's favorite topic, very fun. I know it seems a bit contradictory to include risk management in the same sentence as innovation, but we're here to find out how, or whether, that's possible. If you were to build a house, you wouldn't start with the roof, would you? Neither would we build a house on sand.
If any of us remembers what it was like to be a child, making sandcastles, you know what happens if it gets wet, if it rains, if a wave comes up: it crumbles. What does that have to do with open-source dependencies? What does it have to do with application security? As we all probably know, you need a strong foundation first in order to build on top: the walls, the roof, the windows. Historically, as we've probably all experienced or still experience, security can be seen as, or can be, a blocker. Really, security should be there to provide us with a blueprint for how we can build, how we can innovate. Especially in the face of increasing threats, how can we have more confidence as we build? How can we have more confidence that we're building in a more secure way, without introducing vulnerabilities into production?
I'm Celine. I'm Security Vulnerability Manager at Johnson Matthey. We are a global, 200-year-old manufacturing company. Some very old stuff, some very cool new stuff like quantum. As you can imagine, the threats and risks that we face every single day, from physical safety risks, like with chemicals, all the way to the latest cyber threats, are all top of mind for us. I'm here to speak and hopefully share some best practice and some tips around how we can manage these risks and threats.
Blueprints To Building Open-Source Dependency Security
Together, let's look at the blueprints to building open-source dependency security. Three key takeaways, if we could group them into three key points. Number one: first, we need to know what is in our environments. That's the identification phase: being able to identify what the components are, what the building blocks are, in our in-house applications and in the applications we provide and service to our customers. As you've probably already experienced, this is a continuously evolving thing. It's something we're never going to stop; we're going to have to keep doing it. How can we do it in a more efficient way? Number two: ownership and accountability. Or, really, you could probably say that's the first one, the more important one. It depends on what works in your environments.
As we've probably all experienced: GitHub repositories that are orphaned, that are no longer maintained; certain applications or open-source components that nobody looks after; or some server that has been sitting there unpatched for three years because the person left the company. I don't know if any of these are familiar scenarios for anyone here, but this is where ownership, and helping hold each other accountable, is really key to effective threat protection and risk management. Then, last but not least, and probably also another important one: how can we move away from being reactive? How can we move away from the constant firefighting, spinning plates, and the whack-a-mole game, and become more proactive? In that process, how can we help our developers, our operations teams, and our security teams leverage automation, so they can focus a little bit more on the proactive, fun stuff rather than constantly putting out fires?
Identify and Prioritize
Problem statements, and I think we could probably all agree this could be an endless list. As most of you probably know, or are well aware, probably even more than I am, open-source software in general has just increased in usage. It's increasingly part of the day-to-day for everybody, not just builders and makers, but users as well: all the mobile apps we use every day, all the websites we go on. Increasingly, open source is a key building block of what we use every single day and what we rely on in modern life. It's ubiquitous. It's everywhere. That means these are links in a chain, and if a link breaks, the whole chain breaks. Again, that sandcastle example.
One of the challenges we've seen in this space is the lone maintainer: when it comes to open source, especially free open source, as most of you probably know, you might have just one, two, or three maintainers, or a small team, and oftentimes they're doing it as a passion project. They're doing it on the side. They're not necessarily getting paid for it. That is a challenge, and it introduces security challenges, because on top of everything else, they don't really have time to make sure that everything they build is secure.
Because open source is just everywhere, there's a human aspect. I had a background in psychology before I switched to computer science, and this is a common thing for us as humans: implicit trust. Everybody uses this, or we use this every day; it has a good reputation; therefore, it must implicitly be trustworthy. We embed these components into our code with that implicit trust. Last but not least, and probably most relevant, is AppSec fatigue. For any of you who are not part of a cybersecurity team, you may be telling me, "Celine, I keep hearing about security. Everyone bangs on about it: SecDevOps, shift left. I'm not a security practitioner, so how am I supposed to stay on top of this?" That's a really fair point. How are we supposed to do our part in security if it's not our day job? It's almost like another job on top of what you're already doing. Naturally, you don't really have the time, or perhaps the resources, to worry about it.
Just a little number, because I love numbers. In a Black Duck report last year, it was found that open-source software is present in 96% of commercial codebases. Commercial codebases. Again, just to really hit home why open-source security is so critical. Now for some examples of attacks and errors. I put errors on here because, as you probably well know, it's not just about malicious attacks. If anybody remembers CrowdStrike last year, which resulted in banks closing and airplanes having to land, that was a human error: there was a memory safety issue, among a bunch of other issues. It wasn't open source, but it had to do with a dependency as well. Not to get into the technicals of it, but CrowdStrike was in large part down to a lack of efficient QA and testing.
Again, it really proves the point. XZ Utils, if anyone remembers that from last year: a major backdoor was found in the XZ compression package, which is very widely used. This is a really interesting one regarding the lone maintainer. A developer appeared out of nowhere three or four years ago and started to contribute to that project.
Over the years, they built trust. They used social engineering. They made several legitimate contributions. After three years, they had gained so much trust that they became a maintainer. At that point, they introduced a backdoor, which turned out to have a vulnerability severity score of 10 out of 10. Then there are the dependency confusion attacks, in the middle there. That's a commonly seen tactic in security: for example, naming something malicious almost the same as something trusted.
For example, instead of left-pad, it's left_pad. As you're typing, you think, this package is legitimate, I can use it, when in fact it might be something malicious made to look like the real thing. In a way, it's spoofing you. (Strictly speaking, that lookalike-name trick is typosquatting; dependency confusion proper exploits a public package shadowing the name of an internal, private one, but both abuse our trust in a name.) Then on the bottom, the left-pad incident. That's an example of an error, where somebody deleted a package that React depended upon, and because it was deleted, it broke anything using React. That happened several years ago, but it's a really good example, again, to drive home the point about dependencies. All of this is probably stuff you're familiar with, way more than I am.
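Name-lookalike abuse like left-pad versus left_pad can be caught mechanically before a package is pulled in. Here is a minimal, illustrative sketch, not a production defense: the trusted-package list, the separator normalization, and the similarity threshold are all assumptions you would replace with your own allow-list and tuning.

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of packages your project already trusts.
KNOWN_PACKAGES = {"left-pad", "lodash", "requests", "express"}

def looks_like_typosquat(name, threshold=0.85):
    """Return the trusted package this name suspiciously resembles, if any."""
    if name in KNOWN_PACKAGES:
        return None  # exact match: the real thing
    for trusted in KNOWN_PACKAGES:
        # Normalise separators so left_pad vs left-pad is caught.
        if name.replace("_", "-") == trusted:
            return trusted
        # Fuzzy match catches near-misses like transposed letters.
        if SequenceMatcher(None, name, trusted).ratio() >= threshold:
            return trusted
    return None

print(looks_like_typosquat("left_pad"))   # flags "left-pad"
print(looks_like_typosquat("requests"))   # None: exact trusted match
```

A check like this could run in a pre-commit hook or a CI step over a lockfile diff, so a spoofed name gets questioned before it ever reaches prod.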
What can we do about this? What can we do to start getting proactive? What can we do when there's so much to do and you don't know where to start? Before we focus on the strategic, perfect end state, let's start with what we can and should do now. What is most urgent, to reduce the risk as quickly as possible? To get tactical: first, identify what is in your environment. Perhaps some of you are already doing this. Perhaps you're already using a software composition analysis tool; but if you're not, get one, and start detecting all the open-source dependencies and components you're using. Start to look at what is vulnerable, what the vulnerabilities in your environment are. Start, or continue, or mature security reviews as part of your development lifecycle, your release cycles, your sprints. It's also key to look at dev, test, and prod. Some people say, we don't really care about what's in dev, because it's not in prod.
The thing is, that's still in your infrastructure, still in your network, so if anything happens there, it could still proliferate into the rest of your organization and impact you. Then, prioritization. Working in vulnerability management, this one really hits home for me, because I live it literally every day. If you've got a thousand vulnerabilities, where do you start? You don't have the resources. You don't have the staff. You don't have the time. A quick little formula: look at the more severe ones, perhaps starting with the criticals and highs, but don't just look at the criticality. Draw a Venn diagram and cross-check that with what is easier to fix. What can we fix this week, or in the next month? Then also look at the exploitability: how easy is it, what's the likelihood of it being exploited? That's something a good SCA tool can help you look at as well: not just how critical it is, but also how likely it is to materialize in your environment.
That will help reduce your list of 100 down to the top 10 vulnerabilities, or the top 5. Here's your homework: first focus on those top 5. For anything that's medium or low, or anything that will take a long time, take budget, take a three-month-plus cycle, or take UAT, you can at least be aware of it and start planning. Plan for later on: start making a business case, work on getting budget, or work on getting it into the priorities in your pipeline.
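The triage described above, severity crossed with exploit likelihood, filtered by fix effort, can be sketched in a few lines. Everything here is illustrative: the CVE IDs are placeholders, and the one-week cutoff and the severity-times-likelihood score are assumptions to tune against your own environment and risk appetite.

```python
# Severity is CVSS-like (0-10), exploit_likelihood an EPSS-like
# probability (0-1), fix_days an engineering estimate.
vulns = [
    {"id": "CVE-A", "severity": 9.8, "exploit_likelihood": 0.9, "fix_days": 2},
    {"id": "CVE-B", "severity": 9.1, "exploit_likelihood": 0.1, "fix_days": 60},
    {"id": "CVE-C", "severity": 7.5, "exploit_likelihood": 0.7, "fix_days": 3},
    {"id": "CVE-D", "severity": 4.0, "exploit_likelihood": 0.2, "fix_days": 1},
]

def triage(vulns, max_fix_days=7, top_n=5):
    """Quick wins: severe, likely to be exploited, and fixable this sprint."""
    quick = [v for v in vulns if v["fix_days"] <= max_fix_days]
    quick.sort(key=lambda v: v["severity"] * v["exploit_likelihood"],
               reverse=True)
    fix_now = quick[:top_n]
    # Everything else is planned, not ignored: backlog it with a business case.
    plan_later = [v for v in vulns if v not in fix_now]
    return fix_now, plan_later

fix_now, plan_later = triage(vulns)
print([v["id"] for v in fix_now])     # ['CVE-A', 'CVE-C', 'CVE-D']
print([v["id"] for v in plan_later])  # ['CVE-B'] goes on the backlog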
Then, assigning and escalating. This is a really key one that I don't see very often in this space, but in practice it's a way you can work more closely with a security team. If you have a security team in your company, or just an IT team in general, reach out to them and look at what the incident process is in your organization. A lot of organizations, depending on size and maturity, will typically have a general incident process: if the network goes down, if the VPN breaks, if something breaks. If you get one of those XZ Utils, 10-out-of-10, big-red-button vulnerabilities, look at what threshold would make sense for you.
At what point should a vulnerability require you to push the red button and create an incident ticket, so you can escalate it and get it the attention it deserves? I would just caveat that by saying this process may or may not work in your environment; it really depends on your maturity. Talk to your CISO, if you have a Chief Information Security Officer, or your head of security, or just the one person wearing a security hat, and see what you can do to work more closely together. Define that incident response trigger threshold, and then start to embed it in your SDLC.
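A "red button" threshold is easiest to agree on, and to automate, once it's written down as an explicit rule. A minimal sketch, where the CVSS cutoffs and the internet-facing condition are assumptions to negotiate with your CISO or security team rather than a standard:

```python
# Illustrative escalation rule: when does a finding stop being a backlog
# ticket and become an incident? Tune the thresholds with your security team.
def should_raise_incident(cvss, exploited_in_wild, internet_facing):
    # XZ-Utils-style near-10 score with active exploitation: always escalate.
    if cvss >= 9.0 and exploited_in_wild:
        return True
    # High severity is an incident only if it's exploited AND exposed.
    return cvss >= 7.0 and exploited_in_wild and internet_facing

print(should_raise_incident(10.0, True, False))  # True: push the red button
print(should_raise_incident(8.1, False, True))   # False: normal prioritization
```

Encoding the rule as a function means the same logic can gate an automated alert in the pipeline and be quoted verbatim in the incident-response standard.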
Just a little bit about software composition analysis. How many of you are already using an SCA tool in your organization? Are you finding that it brings you value, or not? It really depends on the type of tool, what stage of maturity you're at, and how well it integrates with the rest of your environment. An SCA tool can help with documenting the components, the different versions of things, and the licenses in your environments.
For anyone who isn't using this in your organization, I'd invite you, as a bit of homework, to do some research on this. Ask around, ask your leadership, and see if this is something you could look into, because it's a key way to more easily and quickly see what the vulnerabilities are, especially when it comes to dependencies. There's also something called an SBOM, or Software Bill of Materials. Has anyone heard of this before? An SBOM can also help with securing part of that supply chain. As a matter of fact, then-President Biden, a couple of years ago, issued an executive order mandating the use of Software Bills of Materials for software sold to the federal government. Detect your vulnerabilities. Then, again, prioritize: fix the easy ones, the ones you can hopefully get fixed within a week or so. Prioritize, and then plan for the rest.
Dependabot, for example, is a quite well-known SCA tool built into GitHub, so it's quite a popular one. Dependabot can also show you the likelihood of a vulnerability being exploited, which can help you decide: should we prioritize these? Should I assign them to the team, or to a developer? Should I link them to a Jira issue? That helps with decision-making, with prioritization, and with laser-focusing on what's important.
Then on the bottom here, you've got an example of Black Duck. It's been a long time, but I've used this one in the past. It gives you the mythical single pane of glass, where you can see the criticality of vulnerabilities, the components, the licenses, and also an SBOM at the bottom, so you can see it all at a glance. A bit more about SBOMs. An SBOM sounds complicated, like something from procurement, when you look at the name, but it's essentially an ingredients list. It draws up a list of all the building blocks in your software. Research has found that adopting and maturing SBOMs in an organization can help you mature your vulnerability management program. This is a basic example that I generated in GitHub. It's in JSON format, so a little basic and boring looking, but it gives you an example of the attributes it shows: the package manager, version information.
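As a concrete illustration of the "ingredients list" idea, here is a heavily trimmed, CycloneDX-style SBOM in JSON, with a few lines of Python to read it. Real SBOMs (from GitHub's export, or your SCA tool) carry far more metadata, such as licenses, hashes, and the dependency graph; the two components below are just example entries.

```python
import json

# A trimmed CycloneDX-style SBOM: just enough to show the shape.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "left-pad", "version": "1.3.0",
     "purl": "pkg:npm/left-pad@1.3.0"},
    {"type": "library", "name": "lodash", "version": "4.17.21",
     "purl": "pkg:npm/lodash@4.17.21"}
  ]
}
"""

sbom = json.loads(sbom_json)
for comp in sbom["components"]:
    # Each entry answers: what is it, which version, from which ecosystem?
    print(f'{comp["name"]} {comp["version"]} ({comp["purl"]})')
```

Because the format is machine-readable, the same few lines can feed the later automation steps: diffing two SBOMs between releases, or joining component versions against a vulnerability feed.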
Ownership and Accountability
Ownership and accountability. Anyone encountered this issue before? This is the bane of my existence. We need to fix this. Who's looking after this? Nobody. Ask this team. Ask this other team, they say. Then I ask the other team, and they send me back to the first team. Nobody looks after this. This is really common, especially with the way we do IT now compared to 20 or 25 years ago. Not everything is just a CMDB CI item, like on-prem servers, where someone is clearly looking after it. In my experience, there can be a disconnect, a gap, between IT and operations and DevOps and software engineering. How do we bridge that gap? This is where security can help. It sounds very boring, but it's really important and can help you with a business case: talk to the leadership and the decision-makers in your organization who draw up policies or formal standards. If that's too difficult, or you're not there yet, just draw up your own standard. If you're a SecDevOps lead, a senior developer, a manager, or even a junior developer, just draw up a standard.
For example, a standard that mandates, or lays out best practices for, how to use containers. At what point should you scan container images? Or it could have a line that says: before you push to prod, you must do SAST, DAST, and SCA testing. You must at least know the vulnerabilities. You must clear the highs and criticals before pushing. Basically, look at what makes sense for you, and how it fits and aligns with your wider organizational security policies and standards. If you have a head of security or a CISO, a Chief Information Security Officer, leverage them. Reach out to them. Ask for their help. Frankly, in my opinion, if you have a CISO, they should already be asking you, or helping you, with that.
Then, secondly, assigning and defining ownership. This one can be a bit tricky, especially when it comes to open source. There isn't really an owner of OpenSSH, for example, or of libraries. Start to define what makes sense within your organization. Who's going to look after what, and in which team? It could even just be the team building a project: collectively, you are the owner, and you need to look after it. Again, leverage your policy to gain buy-in. Talk to the stakeholders in your organization. If you're the type of organization that has a data protection officer, they're going to be worried about making sure there isn't any personally identifiable information, like people's national ID numbers or any other personal data, sitting in some repository. They have a stake in this. They will want to know, and they can hopefully help you do something about it and make sure things are secure.
The last one, again, sounds like a very boring term. How many of you are aware of whether your organization has an enterprise risk register? That is, a risk register for the whole company, not just for a project. The enterprise risk register is typically used by high-level executives. The CEO, the audit committee, the board, if you have one, will care about what's in it. In other words, it's how the executives measure what the risk is and where we sit within our risk appetite as a business or organization. It's very high-level. You're not going to stick npm package vulnerabilities in there; an executive isn't going to know what that means. What you can do is leverage it to show the traceability of how low-level risks and threats, like open-source dependency risk, ultimately feed into the wider organizational risk.
As an example, this is a very basic version of one. In the middle, that's the big-ticket item the CEO, or your board if you have one, will care about: the business could fail if there is a cyber incident. Very plain, very basic. Then you can work out the traceability of the lower-level threats you face on a day-to-day basis and how they feed up into that. Start tracking those low-level threats and risks, for example, the highs and criticals you can't fix right now because you need budget, you don't have enough developers, or you need buy-in from the top. Document that, and use it to make a business case: this is why I need more developers, this is why we need to look into SecDevOps and mature it, this is why we need better tools. In this example, there's a person here who looks after open-source dependencies.
For example, what often happens in a risk register is that someone's name is put against each risk. It's funny how quickly stuff happens when someone's name is on a risk the CEO will look at. There, as a generic entry, you're not going to list every dependency you're using, but you might put something general: vulnerabilities in open-source dependencies. That is a risk. Put someone in leadership's name against it; you can agree amongst yourselves who that will be. Someone will get voluntold. Then you can see how that feeds up into third-party vulnerabilities, or supply chain risk, which feeds into software supply chain risk, which can lead to cyber incidents, which feeds up into organizational failure due to a cyber incident.
At the top, you can see another example, around the ownership and accountability piece, which feeds into governance: documentation is not updated as part of joiner-mover-leaver. What does that have to do with open-source dependencies? As we all know, if somebody leaves and nobody picks something up, or nobody formally looks after it, it just rots. I think we've all seen legacy systems and technical debt. You can see how these dots start to connect, and how this ultimately helps prevent and reduce technical debt. There's another risk you could track there: orphaned code repositories, where nobody is maintaining something. That feeds up into the higher-level lack of ownership and accountability, which feeds into roles and responsibilities. Roles and responsibilities aren't understood because there's no standard, there's no policy, we don't do training, we don't do awareness.
As part of your standards and policy, you should also consider: it's great to have this stuff written down, but if nobody knows where it is, how to find it, or what it means for their day-to-day job, how can anyone abide by it? How can we comply with our policy if we don't know what it is? Once you draft your policy and/or standard, make sure that you train. Train your developers. Train your teams. Train the security team; get them on board and keep them close. Make sure people understand that when they raise a pull request, or push something to prod, they've ticked the boxes and done things the way they should. All of that feeds into the lack-of-governance risk, which can lead to cyber incidents.
From Reactive to Proactive
That sounds great in theory, but how can we do all this vulnerability prioritization if our job title isn't security manager, if we're not security people? In a way, you could say security is everyone's job, just like it's everyone's job not to commit bribery, and everyone's job to follow the safety policies at work. People like to say security is everyone's job, but it's easier said than done. Draft your standards and policy. Train people. Start to look at what the vulnerabilities in your environments are, what you can fix now, and what needs to go on the back burner but get done later. Then, how can we move to a more strategic way of working? How do we move past that initial, immature stage?
Once you've got your top 5 or top 10 cleared, as you probably well know, it's not a one-time job. You get a gold star, but you're going to have to keep doing this. It has to become part of your SDLC, or SSDLC, the secure SDLC. How do you get to that repeatable stage? Look at automation. With an SCA tool, you could look at integrating it with whatever the security team uses for their regular incidents, cases, or tickets. Do some lessons learned. Do some post-mortems. After you've cleared something, ask: what were the challenges? Was the challenge finding out who would do the task? Was it that nobody could, or wanted to, do it? Was it that you just didn't have time? Document that, and you can then use it as evidence to get further buy-in from your leaders. Continuous improvement.
The automation piece in the middle is really important for reducing AppSec fatigue. Look at whether you can assign ownership, or alert the right person, automatically, so somebody doesn't have to look at everything manually and push tickets to people manually. One of the key things here is to work out a risk threshold. That means seeing if you can automate things so that only criticals and highs generate an alert that goes to somebody; or, to reduce the noise further, only the vulnerabilities that have an exploit, that are likely to be exploited. That way, instead of creating 100 tickets that go to somebody, it's probably going to be 2, 3, or 5. Again, reducing that fatigue.
SCA reachability analysis can also help, especially if you use SAST as well. It checks whether your code actually uses, or is even affected by, the vulnerable part of a dependency. That's another thing that helps with the automation piece and with reducing manual noise and fatigue. Something you could do, for example, is embed it into your pipelines: create a job to run SCA every single day, and automatically alert when a new version of a dependency is available, so the maintainer can update. This takes away some of the manual work of looking in your SCA tool, looking at alerts, and making a decision each time; you can just automate it. Set some SLAs as well: timeframes for when something should be fixed.
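The risk-threshold idea, alert only on highs and criticals that are actually exploitable, becomes a one-line filter once your daily scan results are in a structured form. A sketch, with invented field names that you would map onto whatever your SCA tool actually emits:

```python
# Sketch of the noise-reduction rule: of everything the daily SCA scan
# finds, only raise tickets for highs/criticals with a known exploit.
scan_results = [
    {"id": "CVE-1", "severity": "critical", "exploit_known": True},
    {"id": "CVE-2", "severity": "high", "exploit_known": False},
    {"id": "CVE-3", "severity": "medium", "exploit_known": True},
    {"id": "CVE-4", "severity": "high", "exploit_known": True},
]

ALERT_SEVERITIES = {"critical", "high"}

def alerts_for(findings):
    """100 findings in, a handful of actionable tickets out."""
    return [f for f in findings
            if f["severity"] in ALERT_SEVERITIES and f["exploit_known"]]

for finding in alerts_for(scan_results):
    # In a real pipeline this step would create a Jira/GitHub issue and
    # assign the owner defined for the affected component.
    print(f'ALERT {finding["id"]} ({finding["severity"]})')
```

Here four findings reduce to two tickets (CVE-1 and CVE-4), which is exactly the fatigue reduction being described: the mediums and the unexploitable high stay visible in the tool, but nobody gets paged for them.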
If it's one of those high, critical, really bad, red-button vulnerabilities, maybe the SLA is a week, maybe two weeks, maybe three days. Some vulnerabilities will need a month, or three months; they'll have to wait for an entire cycle. That can be ok too. The important thing is that it gets done and that it's planned. This is just a quick example, and the tools there are just examples. In case you didn't know, you can integrate Dependabot with Jira: it can either link to an existing issue or create a new one. Then you can assign that ownership piece and use it as part of your sprints, for example.
Key Takeaways
If there are three things I hope you got out of this and will take back to your jobs: first, identification and prioritization. That, of course, isn't a one-time thing; it's a continuous cycle. Remember to prioritize the easy-to-fix and the really bad stuff that's likely to be exploited. Get a tool, or make better use of a tool, that can help you with that, so you can remove the manual work, reduce the AppSec fatigue, get proactive, and save yourself some time to innovate. Second, own and hold each other accountable. This is really important. Leverage your leadership and your wider teams to get buy-in and help. Third, move from reactive whack-a-mole to, hopefully, proactive, where you can focus on what matters most and plan the rest, so you don't get overwhelmed. Use automation to reduce the manual work, because if security is not your day job, it has to be made easier so you can focus on innovation.
TODO (Crawl Stage)
That's your to-do. If I could give you some homework, I'm going to voluntell you, because I'm a manager, so of course I have to. Detect the top 5 fixables in your environment: critical, high, likely to be exploited, but also easier to fix, things that won't take three to six months. Then assign: work out how this fits your current environment, the way you run your sprints, the tools you use. Look at automating some of that as well.
Then, again, it sounds really boring, but it can help you make the business case: draft a standard and policy, which will help with the accountability piece and with training people, so they can start to learn and get used to this. The thing is, this is a culture change. It's not an overnight process. People may resist or may not be used to it, but hopefully, eventually, they'll get used to it. Leverage your risk register, or if you don't have a formal one, create your own, the same way you'd keep a project management risk register. Ask your wider teams for support: the security team is there to help, as are networking, infrastructure, and cloud. Look at compensating controls. If something can't be fixed now, ask: can we put some web application firewall protections in front of it? Can we do some rate limiting? Can we use Cloudflare? What can we do to reduce the risk, so that the vulnerability sitting there that can't be fixed won't be exploited as easily? I hope this is helpful, and that it helps you embed, or further embed, the foundations of better security, leaving more room to innovate as you mature this out and automate more.
Questions and Answers
Participant 1: In my organization, we're quite mature when it comes to SCA and how we handle it. There's one area we still struggle with, and that is ownership and accountability. We've tried different models. One is where a central code health team deals with all of the upgrades and their impacts; the other is pushing the responsibility to the developers. We found positives and negatives to both. Developers don't want to deal with common dependencies from other teams. A centralized team has limited resources. Automation doesn't always work, because if the QA breaks, you have to leave automated mode and manually look at the issue. What would you suggest as a route to good ownership and accountability for OSS?
Celine Pypaert: What would I do? I think the training and awareness piece is really key, because at least from what I've seen, in my experience, it's very easy to just tell developers, do this, you should do this, and then it doesn't stick. It doesn't get done. The governance piece is really important. Do you have a CISO in your organization? A DPO? A legal team or general counsel? A data governance team? In my experience, it's about speaking to those stakeholders and asking for help. We don't want to be a stick; we don't want too much governance, but there does need to be at least some, and some way of unblocking this stuff. For example, I've seen teams facing the issue you're facing now go to the CISO, explain it, show the evidence, ask for help, and then the CISO sends out a reminder. Do you have a SecDevOps champion, or a SecDevOps lead, or anyone like that who can help as well?
Participant 1: We have security champions.
Celine Pypaert: You do? Yes. Leverage the security champions, but especially the CISO, because if it gets to the point where nobody does anything, it needs to be escalated. Again, not to be a stick, but in my experience, if it comes from the CISO, suddenly people start to care. Whereas before that, it's like you're asking them for a favor.
Moisset: Let's assume, for example, I have a small team and we don't have a risk register in place. Do you have any best practices or alternatives, just to start to make sure we're covering traceability and ownership? Any tips?
Celine Pypaert: Just start somewhere. Start documenting, even if it's a spreadsheet, even if it's a page in Jira or Confluence, anywhere you can start tracking: these are the vulnerabilities, these are the actions we need to take. The important thing is to document things, because otherwise they just fall off the page. It's very much like project management or project delivery, where there's often a spreadsheet of risks, such as not enough resources to deliver the project. The key thing is that you start somewhere, even if it's not a formal thing.
Participant 2: I'm sure everyone's been in a situation where you're working on a problem, you're like, someone must have solved this already. You're going out and you're looking to assess another npm package or a Ruby gem, or something. What things would you be looking at? I typically look at when was it last updated, frequency of commits, number of contributors, and the dependencies that that has itself. Are there any other things that you would look at or tips?
Celine Pypaert: Do you mean other attributes or other things to look at?
Participant 2: Yes.
Celine Pypaert: The number of maintainers or contributors is a key one, because it also helps inform how big or small your team needs to be to look after it and do something about it. The latest version is definitely important. It may be an obvious one, but when it comes to vulnerabilities, it really is about maintaining it at the latest version, or at least the latest possible. I know with certain application dependencies, sometimes you can't have the latest version because it might break something. Yes, ensuring that you've got the latest version possible is important. That will also help with automation, feeding any kind of alerting on, there's a new update available, you should update now. Obviously, the vulnerabilities. Also, being able to draw a map of your supply chain, because we're each part of a supply chain. We're that link in the chain for our customers, or for the internal users of the applications that we work on and maintain. If you're asking about top things: vulnerabilities, obviously, goes without saying; versions; then, yes, like you said, the team members and who looks after it. That would probably be my top three.
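The attributes discussed here can be folded into a simple pre-adoption check. The thresholds and field names below are assumptions for the sketch, not a standard; in practice you would pull the raw data from the registry (e.g. `npm view`) or a tool such as OpenSSF Scorecard, and tune the cut-offs to your own policy:

```python
from datetime import date

def assess_package(name: str, *, last_release: date, maintainers: int,
                   open_vulns: int, versions_behind: int,
                   transitive_deps: int) -> list[str]:
    """Return a list of concerns; an empty list means no red flags.
    All thresholds are illustrative, not an endorsed policy."""
    concerns = []
    today = date(2024, 1, 1)  # pinned so the example is deterministic
    if (today - last_release).days > 365:
        concerns.append("no release in over a year -- possibly unmaintained")
    if maintainers < 2:
        concerns.append("single maintainer -- bus-factor risk")
    if open_vulns > 0:
        concerns.append(f"{open_vulns} known unpatched vulnerabilities")
    if versions_behind > 0:
        concerns.append(f"{versions_behind} versions behind latest")
    if transitive_deps > 50:
        concerns.append("large transitive dependency tree to audit")
    return concerns

# Hypothetical input values for illustration.
flags = assess_package("left-pad", last_release=date(2018, 4, 8),
                       maintainers=1, open_vulns=0, versions_behind=0,
                       transitive_deps=0)
```

Even a crude checklist like this makes the assessment repeatable across the team, rather than each developer eyeballing the npm page differently.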
Actually, another one, and this is something that I do myself, is not just the dependency in and of itself, but also where it's used, in which applications, and then, finally, on which infrastructure. Is this something running in our VMware environment? Is this something in AWS or Azure? We have to look at the full infrastructure: the IaaS, PaaS, infrastructure as code, containers, Kubernetes nodes. We have to look at the entire picture of where it sits, because it's not just about the entity by itself; it's which application uses it and the infrastructure underlying it, because you also have to maintain the infrastructure. That may be the job of someone else in your organization, but yes, that's part of the attack surface.
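The dependency-to-application-to-infrastructure mapping described above can be sketched as a small inventory lookup. The dependency, application, and platform names here are entirely hypothetical; a real inventory would be fed from SBOMs and your CMDB or cloud tagging:

```python
# Hypothetical inventory: dependency -> applications using it ->
# the infrastructure each application runs on.
inventory = {
    "log4j-core": {
        "payments-api": {"platform": "AWS EKS", "internet_facing": True},
        "batch-reporting": {"platform": "on-prem VMware",
                            "internet_facing": False},
    },
    "left-pad": {
        "internal-portal": {"platform": "Azure App Service",
                            "internet_facing": False},
    },
}

def blast_radius(dependency: str) -> list[str]:
    """Where would a vulnerability in this dependency actually land?"""
    apps = inventory.get(dependency, {})
    return [f"{app} on {info['platform']}"
            f"{' (internet-facing!)' if info['internet_facing'] else ''}"
            for app, info in apps.items()]
```

Asking `blast_radius("log4j-core")` then answers the question the speaker raises: not just "is this dependency vulnerable?" but "which applications, on which infrastructure, does that vulnerability actually touch?"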
Participant 3: You mentioned securing your own part of the supply chain, but, for example, say you have an open-source policy that a dependency needs to have a maintainer and must not have vulnerabilities. If something like left-pad is a transitive dependency, the alternative is not to stop using React, and building React yourself is not, I think, the solution. How would you secure your part of the supply chain if you cannot manage that?
Celine Pypaert: That's a good point, yes, because that's part of the supply chain, but, like you said, it sits upstream of us; it's part of React's own supply chain. That's the interesting thing about supply chains: it's a multi-chain thing, obviously. What you can do is, again, go back to the underlying infrastructure. For example, you mentioned left-pad. Let's say you've got a React vulnerability. You can't control that, but what we can control is the infrastructure that we are looking after, let's say our cloud infrastructure. It necessitates an attacker mindset: thinking about what we can do to reduce the attack surface so that, yes, we have this vulnerability sitting here in React, but what can we do to make sure our application isn't sitting on something that's internet-facing when it shouldn't be internet-facing? Can we close that down? Can we make it harder for it to be exploited in our environment? Can we make it harder for our application to get exploited? It's a bit of a difficult one.
Participant 3: To focus mostly then on reducing the exploitability.
Celine Pypaert: Yes, exactly, reducing the exploitability as much as we can. Then, of course, it depends on the nature of the dependency or the vulnerability. If it's something that breaks React, then obviously there's only so much you can do: your application isn't going to work because React isn't working. So it depends on the vulnerability.
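The "reduce the exploitability" triage discussed in this exchange can be made concrete with a simple filter: when a transitive vulnerability can't be patched, check whether the affected deployments are more exposed than they need to be. All the deployment data below is hypothetical, for illustration only:

```python
# Hypothetical deployments of applications affected by an unpatchable
# transitive vulnerability. "needs_internet" records whether exposure
# is actually required by the business.
deployments = [
    {"app": "internal-portal", "internet_facing": True,  "needs_internet": False},
    {"app": "public-site",     "internet_facing": True,  "needs_internet": True},
    {"app": "batch-jobs",      "internet_facing": False, "needs_internet": False},
]

def surface_reductions(deps: list[dict]) -> list[str]:
    """Apps exposed to the internet without needing to be.
    These are the cheapest attack-surface reductions: close them first."""
    return [d["app"] for d in deps
            if d["internet_facing"] and not d["needs_internet"]]
```

Here `internal-portal` is flagged: it can't lose the vulnerable dependency, but taking it off the internet makes the vulnerability far harder to reach, which is exactly the mitigation the speaker describes.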