Transcript
Ali: My name is Asra. I'm going to be talking about achieving SLSA certification with a bring your own, or BYO, framework. It's a lot of jargon right now: SLSA, BYO Builder. We'll get to what that means. I'm a software engineer at Google. I mostly work in the privacy and security space. Currently, I'm working full-time on a Fully Homomorphic Encryption transpiler. In my spare time, I work on open source security projects like this one. In the spirit of staying open source (most of my projects are open source), I'm going to be talking to you about open source SLSA certification and what that really means.
Layout
I will talk a little bit about what attestations are, what SLSA is, and what it means to be a SLSA certified builder. Then we'll get into the fundamentals of building something like that: an attestor that can do the thing we want it to do. I'll also talk a little bit about the BYO Builder, this bring your own framework that I'll be discussing. The goal is to show you what it takes to become a builder, and the intricacies and requirements needed to actually be a SLSA certified, trustworthy builder. I'll show a quick demo, then wrap things up from there.
Why Attest?
Why attest? Ultimately, our goal for this talk and for SLSA as a project is to establish trust in a system or platform, and then be able to automatically verify claims that were produced by that system. To enable automated verification, verification that people will actually perform in large systems, we need to be able to produce data. Not only do we need to produce that data, but we also need to produce trustworthy data. Let's first talk about what the target of that data is, and what we'd want to verify from it. This is a pretty classic software delivery pipeline. Maybe some of you have already seen it before, if you've heard the word SLSA or the word attestation, or of similar sorts of frameworks. It's a pipeline that describes the stages of software development, starting from the developer, through the source control repository, through the build platform, to where it gets deployed, and then that final resource and where it gets consumed.
Each of these little arrows in the pipeline represents a different link involved. In each of those links, something can go wrong. Exactly those things that go wrong are the things that we want to prevent. Unfortunately, only one thing needs to go wrong for this entire delivery pipeline to get compromised or fail. As with most defensive security, you need to protect each of these links, while it only takes compromising one of those links for an attacker to succeed. Naturally, you have a difficult job. For example, let's take the compromised build system at the top, above build over there. What that link is trying to describe is an attacker or some adversary, or even perhaps a compromised insider, injecting a malicious artifact into the build system, bypassing what you normally would have expected the build system to produce. Another one is maybe your source control repository gets compromised, bad source gets injected into your build pipeline, and then you don't get the artifact that you expected to have. Each of these points requires some protection to guarantee that what you wanted from the previous step was actually what you got. Then you can perform the action that you need to perform, whether it's a build, a test, or a dependency scan, and proceed with the next step. Unfortunately, these sorts of bad links do happen in practice. Lots of things have happened in terms of compromised build systems, for example, the SolarWinds attack, and injecting bad dependencies happens quite frequently. This is a pretty classic framework for thinking about where the risks are on a software delivery pipeline and in that lifecycle of software delivery.
On the flip side, if we do have all of these links that we want to protect, there's also something we can do to mitigate some of that risk. That mitigation is some kind of validation or verification that you can perform at each of these links, again, to validate that the previous step produced something that you, as someone in this pipeline, expected. For example, if you want to secure the source control repository step of your pipeline, you might say, let's ensure that each developer has authorization to submit code here, whether that's two-factor authentication or some other mechanism for IAM. In that case, you'd be securing who is actually contributing code to your source control repository. That's just one step out of all of these things. Likewise, one thing you can do to mitigate what artifacts are produced by your build system is to sign them. Zack and Billy gave a good talk about signing software. Those are exactly some of the tools, like Sigstore, that you can use to validate the step from build to deployment. Again, like I said, all of these have some expectations involved. The deployment might expect a certain build system to have produced an artifact. Or the source control repository might expect users with certain credentials to be submitting code. There's always a notion of, what event happened? A user submitted code, a build got performed? There's always, on the flip side, an expectation that some properties of that event were true.
Basically, I've made the case for why we need this data: we need it to actually perform these validations. If we have data that says who the user was that provided the code, then we can effectively ask, did that user have authorization? This isn't just relevant for the particular build pipeline that we have here, your internal builds: what are we doing at each step, and how can we ensure that nothing was compromised during our software delivery pipeline? It's equally important, if not more important, for end users who are consuming software, even if that's not you yourself. An end user downloading some artifact might care that the author of the artifact was such and such organization, or so and so author. Not only is this important for an internal software delivery pipeline, between all of these steps, but it's also important at the end. Likewise, I think Billy and Zack also gave some examples of consuming some of that software and consuming some of those resources.
Again, in all of these cases, we had some data involved. For that verification, what we're looking at is taking some data called an attestation. Generally speaking, for all of these types of events, attestations apply: attestations are some proof of an event. That can capture what happened at the build step, or what happened when we pulled in dependencies, or what happened when a user submitted some code to the repo. Basically, the attestation makes explicit the claims about what a user performed. It lets you capture: what are the inputs to that event? Where did that event happen? What process was taken in that event, was it a build, was it a submission of code? Likewise, what is the output of that? Thinking of the software delivery pipeline, now you have these chains built of inputs and outputs to each other, and processes that happened in between. What we want to capture is, what was that event that happened? Yes, it depends on each one of these little links. Let's take an example for one piece of that software delivery pipeline. Consider the step that pulls dependencies into your build, whether it's a toolchain or tools you need to compile your code, or packages you need, and so on. One risk that you have is injecting potentially vulnerable dependencies, or dependencies you didn't expect. In this case, you could configure an event to happen, a dependency scan, that could go and check whether you're pulling in or using packages that have publicly known vulnerabilities. Or, if you're building your dependencies from source, you might even be able to scan their code and maybe detect something like that. That scan can produce a report, maybe a vulnerability scan report; there are some formats out there for creating these.
That report can be consumed by the build step before actually triggering the build and saying, let me go and check whether the dependencies I'm about to pull in are safe. Ok, they're great, let's proceed. That type of pattern of automated verify the previous step, ok proceed, is exactly the type of thing that we want to do.
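The verify-then-proceed pattern can be sketched in a few lines of Python. This is a conceptual sketch; the report structure and field names are illustrative assumptions, not a real vulnerability-report schema:

```python
# Conceptual sketch of the "verify the previous step, then proceed" pattern.
# The report structure below is an illustrative assumption, not a real schema.

def should_proceed(scan_report: dict) -> bool:
    """Gate the build on the dependency scan produced by the previous step."""
    # Refuse to build if any dependency has a publicly known vulnerability.
    for dep in scan_report.get("dependencies", []):
        if dep.get("vulnerabilities"):
            return False
    return True

report = {
    "dependencies": [
        {"name": "left-pad", "version": "1.3.0", "vulnerabilities": []},
        {"name": "oldlib", "version": "0.1.0", "vulnerabilities": ["CVE-2021-0000"]},
    ],
}
print(should_proceed(report))  # one vulnerable dependency blocks the build
```

In a real pipeline, the report would come from the scan step's signed attestation rather than a plain dictionary.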
I want to give a quick shout out here, and that is to the in-toto project. Marina has a talk about securing the supply chain with a variety of different tools. She's going to talk about how to complete this picture, and in-toto is one of those pieces. in-toto defines attestation types, as well as ways to describe the expected values for them. If you're a software producer, you might denote, ok, at each step, I'm going to produce these types of attestations, and I'm going to describe my expected values for those attestations. You might describe the expected build system for a production build, which might be different from the expected build system for a testing build. This is going to be basically the format for our data. Now that we have that data: if you can't trust the data, is it really data? You can produce as much data as you want. I can tell you as much information as you want. In the end, if that information isn't accurate, or if that data isn't true, maybe it's not even that useful to have in the first place. That being said, baby steps at a time: let's produce the data first, and then let's go and ensure all of its security properties. If we want to create the most useful tool or system that can produce some of this data, the best-case scenario is that this data is going to be trustworthy. We want to have a way of evaluating whether that data is trustworthy or not. That's going to be part of what I'm going to talk about: how do we evaluate it, and what sorts of properties and requirements do we need these attestations to have in order to call them trustworthy?
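For concreteness, an in-toto attestation statement has roughly the following shape. The subject name and digest here are made-up placeholders, and the predicate contents vary by attestation type (a SLSA provenance predicate, a vulnerability-scan predicate, and so on):

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    {
      "name": "my-artifact.tar.gz",
      "digest": { "sha256": "e3b0c44298fc1c..." }
    }
  ],
  "predicateType": "https://slsa.dev/provenance/v1",
  "predicate": {}
}
```

The `subject` binds the claims to a concrete artifact by digest, and the `predicateType` tells a verifier how to interpret the claims in `predicate`.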
The first one is integrity: if we really want to trust that an attestation or a piece of data came from a certain process, then we should make sure that no one can manipulate or modify that data in between. That's the aspect of tamper-proofness: if there's a process running on the source control repository and it produces some data, did it really come from there, or did it come from your local hacker? Or if your build system produced some data, can it produce that data in a way that isn't going to be manipulated by another party? We need some way of ensuring that whatever data it produced is properly tamper-proof and signed. That way it prevents rewrites. The second property is authenticity. This allows an end user, or some other actor or process in the pipeline, to decide whether or not the data was produced by a certain author. This is really that example that I keep bringing up: did the production build system produce the production artifact? We want a way of identifying the production build system as distinct from the testing or staging build system. We need a way of identifying who produced that attestation, in order for a policy to say, yes, I trust these authors of this attestation, or, yes, these are the authors that I expect. This is usually given in conjunction with the integrity property, usually by signing with key material that can be tied to the author. This again echoes the tooling of Sigstore, or other ways of signing software that have some identity involved.
The final property is a little bit more tricky. I know many of you have heard of code signing and other sorts of signing before, so those integrity and authenticity properties probably seemed familiar to you. This final property is called non-forgeability, and it's a little bit more nuanced. What it states is that the attestation content cannot be manipulated by either the process actually running in that environment, or the user that triggered or was in control of that pipeline. What this is aiming to prevent is really insider risk, on one hand. As in, someone who triggers a build can't say, "Yes, this is what happened," when it's not actually what happened in the build system. "I triggered the build, so trust me, I know what's going on." It's trying to mitigate that insider risk, but also the risk of the actual process that you're running manipulating the attestation content. This is truly zero trust in the people performing the action: what actually happened? If you're wondering how this is possible, think of it as some trusted witness baked into the platform itself, one that is able to attest to the action performed there without being manipulated by the action or by the user. In practice, this is going to be the hardest one to achieve. Think of it like this: if a build system integrated some signing mechanism where you didn't have access to the signing key, and it just signed whatever result was produced, you wouldn't be able to go and take that signing key and do whatever you wanted with it, even though you were in control of triggering that build. This is something that's really in control of the build platform itself, or maybe the process that's actually performing the action. That one's a little bit tricky to understand, but I think we'll dive more into it.
If we can achieve all three of those properties, integrity, authenticity, and non-forgeability, then we can pretty successfully say we have a trustworthy attestation. This was a tamper-proof attestation, one we could tie back to an author, and one that wasn't manipulated even by the event occurring at the time. It's truly a third-party trusted witness to this event, with no forgeability involved. If you can do that: one, verify integrity and authenticity by verifying a signature on that attestation, or on the metadata that describes the event, and two, verify the identity in order to decide, yes, that identity, I know their attestations to be non-forgeable, then you can feed that into a policy engine and retrieve your result. Do I want to proceed with my build, or do I not? Policy engines can evaluate that data once we know it's trustworthy, and compute a proceed or do-not-proceed result.
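Put together, the verification flow is roughly: check the signature, check the signer's identity against the builders you trust to be non-forgeable, then evaluate policy. A minimal sketch, where the signature check is stubbed out and the trusted builder IDs and policy rule are made-up examples (a real check would use, say, a Sigstore verification library):

```python
# Sketch of the trustworthy-attestation verification flow.
# verify_signature is a stub; the trusted builder IDs are illustrative.

TRUSTED_BUILDERS = {"https://example.com/prod-builder"}  # assumption

def verify_signature(attestation: dict) -> bool:
    # Stub: a real implementation verifies the signed envelope,
    # e.g., via Sigstore tooling.
    return attestation.get("signed", False)

def evaluate(attestation: dict) -> str:
    # 1. Integrity + authenticity: is the signature valid?
    if not verify_signature(attestation):
        return "reject: bad signature"
    # 2. Identity: do we trust this builder's attestations to be non-forgeable?
    if attestation["builder_id"] not in TRUSTED_BUILDERS:
        return "reject: untrusted builder"
    # 3. Policy: evaluate the claims themselves (simplified to one check).
    if attestation["claims"].get("branch") != "main":
        return "reject: policy violation"
    return "proceed"

att = {"signed": True,
       "builder_id": "https://example.com/prod-builder",
       "claims": {"branch": "main"}}
print(evaluate(att))
```

The ordering matters: policy evaluation is only meaningful once steps 1 and 2 have established that the data itself is trustworthy.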
Is This Immutability of the Build System or Data?
Ali: Is this immutability of the build system or immutability of the data?
No, it's not immutability of the data. What I'm saying here is that whatever procedure is responsible for signing that attestation is independent of, or out of the control of, the users using the system. It does require having an ephemeral, clean environment. Immutability of the environment alone doesn't give you this property; I think it's stronger than that. It's stronger in the sense that users who are using that environment have no access to it. I think of clean more as ephemeral, a cleanroom model here. Basically, non-forgeability intends that no users, whether they're insiders or outsiders, are manipulating this environment.
SLSA (Supply-chain Levels for Software Artifacts)
Yes, three properties here. What I've built up, what a trustworthy attestation is and what a trustworthy, cleanroom build is, is captured by the SLSA framework. SLSA is Supply-chain Levels for Software Artifacts, which describes incremental levels for achieving this best-case scenario of clean builds, of trustworthy attestations, of being able to say, yes, this is actually what happened. It describes a format for build attestations, which are also called provenance, as in the provenance of an artifact. It also describes the ways that you can incrementally secure your build: ensuring that it's isolated, ensuring that it's ephemeral, and so on, and ensuring that it creates this non-forgeable attestation.
Building An Attestor
How do we get started with creating this builder? The main crux and idea of my talk is how we gain this property of non-forgeability, in a way that is open source and friendly to use. Even if you're not using an open source build platform, and you can do whatever you want to your build system, I'm going to try to convey the ideas of what you need in order to ensure these properties and create this system. Let's look at building an attestor. An attestor is a builder that can produce these secure, non-forgeable attestations. The attestor should definitely provide authentic, tamper-proof evidence of the event, which includes signing the metadata of the event that happened. It must be independent and isolated from the event itself, so the attestor part must be produced outside of the process that is performing the build. It also cannot be impersonated. This goes back to the authenticity piece. Obviously, if I can impersonate my attestor, then my attestation is no longer that useful, because the impersonating actor can go ahead and sub in for it. Now, what if a build platform doesn't natively support attestations? The good news is, there are some that are trying to integrate this. GitHub and GitLab in particular are trying to integrate these properties directly into their build systems, or CI/CD systems in general, in order to produce those trustworthy attestations for you, without you needing to worry about what happened and how they architected it. I promise, they're going to be basically following the model that I'm about to show you. If a build platform doesn't natively support attestations, are you just out in the wild? How are you going to architect this? The good news is that there are models and ways that you can ensure this on some CI/CD platforms. The world is not lost.
How do you ensure this non-forgeability when you have to decouple the logic of creating the attestation from both the build itself, the procedure you want to attest to, and also from the user itself? Both of these can surprisingly be achieved with an open source build system.
What we can do is build a secure attestor on top of the build platform primitives. Let's see how we can set this up, starting with what we have at our disposal from a build platform. Let's assume the build platform has the ability to create isolated, ephemeral VMs. Most build systems, GitLab and GitHub included, can do this: they can configure containers to perform some action. That gives you your isolation and ephemerality, so you can create these ephemeral containers. That also gives us separability between the processes that we do want to separate, which in this case is the creation of the attestation and the event that occurred. First, we need to ensure that the actual event occurs. Let's not lose sight of the fact that building an attestor means we still need to actually do something here, whether that's build something, test something, run a dependency scan, and so on. The event needs to occur. We'll call that the build, and it's going to perform one action. Each of these solid, rounded boxes is a single VM performing an action. That's the actual process. It's going to take in system parameters, which are trusted build system parameters. These might be environment variables that are present in your container. It's also going to take external parameters, which are, let's say, user inputs, maybe some parameter on a build, and so on. Both of these parameters together will result in an output, which might be some resulting dependency scan, a build artifact, or so on. Next, we'll need some VM to actually create the attestation. Again, the key property here is that the attestation generation happens separated from the build step. That attestation procedure must be able to know what happened in the build in order to create something useful about it. It at least needs to know, where did the build occur? It occurred in this build platform that I'm in right now.
It needs to know about those system parameters, which it should have access to, because that's part of the build platform. It should be able to know what the external parameters were that were fed into the build procedure. There's some architecture that needs to happen here, on how it can introspect on what those external parameters were, in a way that is not manipulatable by the build. Once the attestation is assembled by this attestation procedure, it can then be signed. Remember, signing is needed in order to ensure the authenticity and integrity portions of this. In our case, in the deployments of these sorts of attestor frameworks that we have, we use Sigstore. What Sigstore gives us is a machine identity involved in signing these attestations. This means that we have an authentic, non-impersonated identity signing the content of this attestation. We have authenticity and integrity from signing. We have a separated, isolated procedure that can attest, providing us the non-forgeability. Then the final piece, in order to ensure that the attestation logic cannot be manipulated by the user, is that we need to capture all three of these steps, the build, attestation, and signing, in a place separated and isolated from the user: in a cleanroom, basically, in a pipeline that is clean from the user. The user cannot go ahead and, with their external parameters, manipulate the signing key or manipulate the action that is creating the attestation. This internal portion, captured inside the dotted line, exists in a boundary away from the user. The user cannot configure anything beyond those external parameters. The signing and attestation generation happen away from the user.
All in all, again, we're capturing all three of these trustworthy attestation properties. What we also have to do, in order to ensure that this works out correctly, at a very high level again, is ensure that the data going between the build and the attestation, and from attestation to signing and your outputs, is all transferred over some secure channel. What we especially do not want is for the user to be able to, let's say, manipulate the transfer of data between the build and the attestation container. In that case, all would be lost: the user could make the attestation process attest to whatever it wants. With all three of these, and with the underlying assumption of the build platform having isolated, ephemeral containers, we can make all this happen. This was a very high-level view. You're probably wondering, what are all the details here? The details are tricky. Implementing a builder like this is very tricky. We've done it, and it takes a couple of months. What we don't want is for developers who don't care about this sort of thing to be doing this. What we want is for developers to be able to say, let's go kick off a build, or let's go kick off a dependency scan, and not have to worry about architecting their build platform to produce attestations. We want that to happen out of the box.
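On a platform like GitHub Actions, this separation maps naturally onto jobs, each running in its own ephemeral VM. A rough, hypothetical sketch (the job names, scripts, and input names below are made up for illustration, not the framework's actual layout):

```yaml
# Hypothetical sketch: each job runs in its own isolated, ephemeral VM.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # The actual event: takes trusted system parameters (environment)
      # plus user-supplied external parameters, produces an output.
      - run: ./build.sh "${{ inputs.external-params }}"
  attest:
    needs: build              # runs after, and separately from, the build
    runs-on: ubuntu-latest
    steps:
      - run: ./generate-attestation.sh   # assembles metadata about the build
      - run: ./sign-attestation.sh       # e.g., via a Sigstore machine identity
```

The point of the sketch is the boundary: the `attest` job, not the user and not the build process, assembles and signs the attestation, and the data passed between jobs must travel over a channel the user cannot tamper with.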
Dealing With Asynchronous Structures, and Tests, and Attestations
Ali: How do you deal with asynchronous structures, and tests, and attestations?
In the current implementation, the way we've run it, we actually have to wait for all that data to be fully retrieved in order to attest to the output of it. If your output is, let's say, the resulting coverage, you would have to wait for that coverage to happen. If you architected this in a way that says, let's just attest to the commitment to creating that, yes, you can do that. But you need to really ensure that the end result data is not able to be swapped out or tampered with.
Building an Attestor
Building this attestor is very hard to do. We haven't even covered all the possibilities for creating some of these attestations. There are some baseline requirements for the structures that we can attest to. We don't want developers to be introspecting on or thinking about these sorts of structures. The problem is, we want to capture arbitrary build processes or arbitrary logic in here, whether that's creating a scan, creating a build, or documenting who submitted code: code submission as an event. We don't want the attestation step to really know what that build is, but we still want to capture enough information about the build to decide, was this a build event, or was this a source code submission event? There's a really tricky balance in being able to describe what build occurred without receiving information from the build process, which we do not trust. Let's talk a little bit about that.
The BYO Builder
This is where the BYO Builder comes from. The backstory here is that we started creating some of these builders, but they were very specific to their build logic. We created a builder for Go projects and a builder for Python projects, each able to understand only those types of builds, because the parameters it exposed were only parameters like a configuration for a Go build. What we want now is not to have to create so many builders; we want you to just be able to run whatever you want and be able to attest to that. That's what this framework does. The implementation of this framework is built on GitHub Actions. I'm going to talk a little bit more specifically about that and show you the demo. Essentially, we want to create this SLSA attestor for any type of build. We don't want a hardcoded build involved here. The general framework can work on top of any build platform that exposes those build primitives I explained before. This implementation is going to be on GitHub Actions. In GitHub Actions, the isolated container VMs are called jobs. If you're familiar with GitHub workflows, you can configure GitHub jobs in your workflow. The idea, in order to create a general-purpose, templated attestor, one that someone can just configure for their own build, is that tool owners, the people configuring this attestor, provide a callback to whatever build they want to do. This way, the attestor knows where that callback was located, and is able to get an immutable reference to where that callback was, which in this case is going to be a GitHub Action. Then the attestor is able to attest to that build being run in that callback with the specific parameters supplied. We're basically abstracting away what that build actually was, instead of requiring it to be a Go build or a Python build.
Now, it's a callback that you can provide when you're building your attestor. You don't need to do any of the rest of the steps, you just need to plug in your build.
On GitHub Actions, what we're going to do is create a template build, or BYO Builder, implemented as a reusable workflow. This gives us the layer of separation between the user and the actual steps of performing the build, attestation, and signing. If you invoke a reusable workflow that is hosted in a different org, you don't have access to manipulate it. You have access to whatever parameters it exposes, which in this case is going to be a callback action to the build. When you want to build your own attestor, when you want to bring your own builder, you provide the path to your GitHub Action as a callback, and the BYO attestor attests to that. Therefore, we basically get: build your own builder. What we have here is the right separation of concerns. We, as the BYO Builder producers, handle the attestation logic, the signing logic, and the logic of data transfer between the build and the attestor, while you, the tool builder or workflow creator, just sub in your build step. Then users just invoke the build. They don't have to worry about any of the other steps here. If all this is a little bit confusing, because now there are three layers, the user layer, the tool builder layer, and then the underlying BYO Builder, I will show you in the demo how this all works.
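As an illustration of the layering, a tool builder's workflow might invoke the BYO reusable workflow and hand it a callback action. The org, path, and input names below are hypothetical placeholders, not the framework's actual interface:

```yaml
# Hypothetical tool-builder workflow invoking a BYO-style reusable workflow.
# Org, repo, workflow path, and input names are illustrative assumptions.
jobs:
  slsa-build:
    uses: your-org/byo-builder-repo/.github/workflows/byo-builder.yml@v1
    with:
      # Callback: the GitHub Action containing this tool's build logic.
      build-action: ./.github/actions/my-build-action
```

Because the reusable workflow lives in a different org, the tool builder can pass only the inputs it exposes; the attestation and signing steps inside it are out of reach.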
Let's look at an example attestation that was generated. This uses the SLSA framework attestation formats, which in turn use in-toto's general attestation statements. They denote, again, authorship, the event that occurred, the environment it occurred in, and the outputs. Specifically, though, I've zoomed in on the builder ID, which is the author of that builder. In this provenance, this attestation in particular, it's going to be a reference to the tool builder: the person that distributed, say, a SLSA attestor that can produce a Java build with an attestation. It is not the underlying BYO Builder; that would not be what we want, since you want the authorship of your build to be the builder that you actually created. The build type is our delegator build type, which is another word for BYO Builder. This allows someone to say, ok, this came from something that was built on top of our framework. That gives them a little hint toward understanding what the parameters were. It also records all the input parameters, as well as the system parameters. Basically, the goal of all this is that it enables third-party tool builders. Let's say I have a dependency scan; I can now produce a dependency scan with a SLSA attestation. Or I have a Java builder GitHub Action, and now I can create a Java builder that also produces a SLSA attestation. These tool builders, these people creating these tools, now have the infrastructure they need to just plug and play in order to get SLSA attestations for free. They don't need to recreate any of the rest of the steps. They just need to provide their GitHub Action. It also means that users who are using their tool need to trust those tool builders, rather than trusting some underlying thing.
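Zooming in on those fields, the provenance predicate might look something like this. The URIs and parameter names are illustrative placeholders following the general SLSA provenance v1 layout, not the exact values the framework emits:

```json
{
  "predicate": {
    "runDetails": {
      "builder": {
        "id": "https://github.com/example-org/java-builder/.github/workflows/builder.yml@refs/tags/v1"
      }
    },
    "buildDefinition": {
      "buildType": "https://example.com/delegator",
      "externalParameters": { "build-config": "..." },
      "internalParameters": { "github": { "repository_owner": "..." } }
    }
  }
}
```

Note that `builder.id` points at the tool builder's workflow, while the delegator `buildType` signals that it was produced on top of the BYO framework.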
Callback Flow
Ali: Essentially, they plug in their callback action to the build. Then we simply require them to say where the output was located. Basically, in order for some general attestor or witness to say, that was the thing I was interested in, the build has to say, I just produced a results.sarif, or I just produced a dependencyscan.json. That's the one thing that we do require from them. They do need to relay that information back.
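That one requirement, the callback reporting where its output landed, could be expressed as an action output. This sketch uses hypothetical file, script, and output names:

```yaml
# Hypothetical action.yml for a callback action: the build reports which
# file it produced, so the attestor knows what to attest to.
name: my-scan-action
outputs:
  artifact-path:
    description: "Path to the output the attestor should attest to"
    value: ${{ steps.scan.outputs.artifact-path }}
runs:
  using: composite
  steps:
    - id: scan
      shell: bash
      run: |
        ./run-scan.sh --output results.sarif        # the actual build/scan
        echo "artifact-path=results.sarif" >> "$GITHUB_OUTPUT"
```

The attestor reads `artifact-path`, hashes that file, and records the digest as the subject of the attestation.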
BYO Node.js Attestor
As an example, one of our first implementations actually using this framework was a Node.js builder. This builder workflow is able to build npm packages using scripts defined in your package.json. All we needed to create was a Node.js builder GitHub Action, and then plug that into the BYO framework. The attestor takes care of the rest. We get out, as a result, the built package. Then that workflow can not only call the BYO attestor to create the attestation and produce the build, it can also go and publish the attestation with the package on some registry. It's just plug and play. All you have to deal with is your build logic and the output that you want. For the user of the workflow, again, all they need to specify is the parameters exposed by the builder. They don't have to know anything about attestation generation, which is perfect for them. They cannot manipulate the signing key. This is them calling the Node.js builder.
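From the user's side, calling such a Node.js builder might look like the following. The workflow path and input names are illustrative; the key point is that nothing in the user's configuration touches attestation generation or signing material:

```yaml
# Hypothetical user workflow: the user only supplies build parameters.
jobs:
  build:
    uses: example-org/nodejs-builder/.github/workflows/builder.yml@v1
    with:
      node-version: 18
      run-scripts: build     # script from package.json to run
    # No signing key appears here: attestation and signing happen
    # inside the reusable workflow, out of the user's control.
```

This is the three-layer split in practice: the BYO framework handles attestation and signing, the tool builder supplied the Node.js build action, and the user just picks parameters.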
Demo
There is a template on this GitHub repo over here, https://github.com/slsa-framework/slsa-actions-template, that you can go to and use to create your own attestor. Whatever GitHub Action you have, you can plug it into the framework, and now you have a workflow that runs your GitHub Action and also produces the attestation. The goal is to create a "my custom tool" attestor, so I'm going to try to create a workflow that calls my BYO Builder with some internal action. I'll need to do a couple of things in order to achieve this. I'll need to modify that internal action wrapper to call my custom tool. Then I'll need to do that one thing I mentioned earlier, which is tell the workflow what I want to attest to: what is the output generated? Because, as you can imagine, builds have plenty of outputs; maybe they have debug files and so on. They do have to specify which output they care about.
The first step in our demo is I'm going to show you the action that we're going to target here, and that is the OSSF Scorecard Action. This is a simple action.yaml file in GitHub Actions, and all it does is run the OSSF Scorecard Action, which produces some metadata about the security of your repository. It's not a dependency scan, it's a code repository scan. It'll alert you if, let's say, there are workflows in there with bad practices, or if branch protection isn't enabled. It's a scanner for creating data about your source control repository. That's the action I want to target. I'm going to run my action here, and it will produce a scorecard results.sarif. It's not too important what it is, but like I said, it's going to produce some data about the code health of your repository. It produces some SARIF results; SARIF is the standard format that code scanners, including GitHub code scanning, use for their results. Now I want to add SLSA attestations to the action that produces the data, because I want to say, yes, I am a trusted system that produced the source control repository scan, which means it's unforgeable, which means I can go and trust the results. To start, I'm going to clone this SLSA action template repository, which has the QuickStart instructions in it. I'm going to follow the steps over here: I'm going to copy the reusable workflow from the cloned template repository into my repository, and I'm also going to copy the action wrapper template. There are a couple of steps here. This one is not so important.
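The wrapper action being targeted might look roughly like this. Treat the version pin and the input names of `ossf/scorecard-action` as illustrative rather than exact; consult that action's documentation for the current interface.

```yaml
# action.yaml: run an OSSF Scorecard scan and emit results.sarif (illustrative)
name: scorecard-scan
description: "Scan this repository's security posture with OSSF Scorecard"
runs:
  using: "composite"
  steps:
    - uses: ossf/scorecard-action@v2
      with:
        results_file: results.sarif
        results_format: sarif
        publish_results: false
```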
This is the actual workflow that's going to perform the scorecard scan with SLSA attestation. I'm going to remove the template workflow's inputs, because I don't need any inputs here; it's just going to run the scan on the repository that's running it. If you see over here, I have a SLSA run step. I don't need any secrets or anything, so I'll delete those. I also have the reference to my build action: that's the internal action wrapper, the callback action that's going to be called. Let's go and modify that. This is the callback action, the internal action wrapper, and I'm going to modify it to call my action. The path to the action happens to be at the root by default, so I don't have to change anything over there. Note here, we have to create a JSON file that describes the attestation we want to create on the output. This is going to describe the resulting untrusted output file we want to attest to: the results.sarif. Those are all the steps that needed changing. I just needed to say where my action was, and what the thing I wanted to attest to was.
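Hypothetically, that JSON file might look like the sketch below. The field names here are illustrative, not the framework's real schema, and the framework itself computes and signs the digests; the callback only names the file.

```json
{
  "version": 1,
  "attestations": [
    {
      "name": "scorecard-scan",
      "subjects": [
        { "name": "results.sarif" }
      ]
    }
  ]
}
```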
In the test repository, let's now create the workflow that calls this SLSA 3 reusable workflow, and then let's see the results. If we check out the results of this run, I'll unzip the output and get the SHA of the file. What we want to do is basically ask: did my attestation reference the results.sarif? If I take the SHA of my results.sarif, it starts with f14. Now, if I go and look at my attestation file, you'll find that I'm attesting to the digest f14. This attestation is now correctly referencing the resulting SARIF. You can also go ahead and verify the signature on this file.
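That digest check can be sketched in a few lines of shell. Both files here are stand-ins: a real attestation is a DSSE-signed in-toto statement produced by the trusted workflow, and you would verify its signature as well, not just the digest.

```shell
#!/bin/sh
set -e

# Stand-in for the artifact the workflow produced.
printf 'scorecard scan results\n' > results.sarif

# Digest of the artifact, computed the way a verifier would.
digest=$(sha256sum results.sarif | awk '{print $1}')

# Stand-in for the attestation's subject entry (a real one is generated
# and signed by the trusted builder, never written by hand).
printf '{"subject":[{"name":"results.sarif","digest":{"sha256":"%s"}}]}\n' \
  "$digest" > statement.json

# The check: does the digest recorded in the attestation match the file we have?
attested=$(grep -o '"sha256":"[0-9a-f]*"' statement.json | cut -d'"' -f4)
if [ "$digest" = "$attested" ]; then
  echo "attestation matches artifact"
else
  echo "MISMATCH: the artifact was modified after attestation" >&2
  exit 1
fi
```

If anyone swaps results.sarif after the attestation was created, the digests diverge and this check fails.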
Conclusion
Basically, attestation creation is hard, but there are frameworks for doing it. If you're using attestations, ensure that they have these three properties. Yes, there are some more limitations beyond these as well.
How to Detect Injection After Attestation Creation
Ali: How do I detect whether something was injected after the attestation creation?
On that previous GIF right there, the attestation references the digest of the resulting output. If you go and swap the output, let's say you swap the build artifact, your attestation will no longer reference that tampered build. In your verification procedure, you have to, one, verify the signature; two, verify the identity of the producer of the attestation; and three, verify that the attestation correctly references the artifact you have. You obviously have to tie the attestation back to the artifact.