Transcript
Shafer: I'm Andrew. We're going to have a little walk down memory lane. This is the history of infrastructure as code. I like to start with definitions, and we're going to talk about language, since this is the language of infrastructure track. Language is any system of formalized symbols used or conceived as a means of communicating. Infrastructure is the basic underlying framework or features of a system. Code is a set of symbols that can be interpreted by a computer or a piece of software.
Language
I want to frame this as clearly as possible. I'm fascinated by language, all sorts of languages: computer programming languages, spoken languages. If you don't know what a language-game is, go read about language-games. Language shapes thought, and thought shapes language, in this interesting way. We don't have the time to go into the Sapir-Whorf hypothesis, and the research that has been shown for or against some of these different framings, but it's just a fascinating study, at least for me and how I think. I often think in words, so when I learn new words, I feel like I can have new thoughts. It's also just fascinating to think about the humanity of language, how all these languages grow from each other, and how the history of language, at least according to the people who study such things more intently than I do, really comes down to some common roots. It doesn't seem like it on its face, but according to the studies, Hindi, Russian, Spanish, and English all descend from a common Indo-European ancestor. I bring this up because I think there are some common themes in the way that our language frameworks have evolved so far, and will keep evolving.
Another quick aside, also part of my fascination with language: there's research that suggests an average human adult can be fluent in a new language in six months, and a native-level speaker in two years. But that's only true if you create the necessary preconditions in how you approach learning it. The word necessary is key, because you will only learn a language to the level that you need to. I think that's also true, to some degree, of programming languages: we'll learn a competency with the language to the level we need for the task we're faced with. This is one of the earliest recorded examples of writing, and I think it's funny, because it's a tablet outlining beer rations. I don't even drink, but I think it's funny because, apparently, we drank before we wrote.
What Infrastructure as Code Means
What does infrastructure as code mean? How do I think about this, and how should you think about it? To me, it implies that the infrastructure itself is an application, which means we can use all our software tricks: the software development lifecycle, version control, testing, and all the stuff we've built up around the process, agile engineering practices, or whatever your favorite tricks are. It also means we should be wary of our software pitfalls. There are a lot of stories where people told their infrastructure as code to do a thing, and it did exactly what it was told, in that it took down everything, because it didn't go through the lifecycle of testing and the rest that would warrant putting it into production.
Kernel, Drivers, Processes, and Threads
I also think it's interesting, when I got this topic and started thinking about it: where does the software end and the infrastructure begin? Where's the separation? Because at the very bottom there's some silicon, and then it's basically software; it's abstractions from pretty early in the process. A lot of what we call infrastructure, if we're working on business domain applications, is really software, and it has been for ages. The kernel is software. All these drivers are software. Provisioning the resources to start a process is an infrastructure thing, but the process itself is software. I don't think we should forget that. If we internalize that, and the more we understand the things above and below the layer we're working on, the more we can take advantage of the opportunities we have with the code we're working on.
In the beginning, at least in the Unix world, there was the Bourne shell. This was the interface into the computing, into the processing power. You have your .sh file, you write some stuff, you tell the computer to do it, and it does it. In some sense, you could argue that this scripting was the primordial infrastructure as code. From there, for a hot minute, we could have had Lisp machines, which is funny in its own way, but they were real. That didn't last very long. Then there was Perl. If you've never seen xkcd, you should definitely go read some. The punch line is God saying, "Honestly, we hacked most of it together with Perl."
The Configurators
This is the story most people associate with infrastructure as code; at least for most people in my circles, this is where the conversation begins. There was a thread recently on Twitter where people were trying to pin down the origin of infrastructure as code. We didn't really get to an origin, but we know we were saying it a lot around 2007, 2008, 2009. I know I said it a lot. There's this long history: the first version of CFEngine is from 1993, which is so long ago. Then you come to this notion of infrastructure as code, but in this context, what infrastructure as code means is really configuration, the proper care and feeding of servers, keeping servers properly configured through the lifecycle of each server. For this paradigm, this generation of tools, the server is the highest-level primitive, and any orchestration you want to do between servers has to be accomplished outside the abstractions those tools provide. It's also interesting, when you look at this, that CFEngine is written in C, and then you see the later tools written in higher-level languages. Puppet was actually first attempted in Perl, and then Python; in the end, Ruby was just the language in which Luke was able to express the abstractions he wanted.
Control Loops
The construct most of these tools are built on is this notion of a desired state. I'm going to come back to this later, because it's important; it's also the basis for a lot of robotics and other types of control systems. The framework lets you say what the desired state is, it compares the desired state to the current state, and then there's logic built into the framework that tries to bring the current state into the desired state. There's also this almost philosophical discussion around imperative versus declarative. On some level, the way you get declarative is by having the right logic to go from the current state to the desired state; in reality, that's always imperative. That's the little trick: you have declarative statements in your language, but reality has time and state, so you end up with imperative work to do that convergence. This will be important in a minute, because we're going to come back to control loops.
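Stripped of any particular tool's syntax, that converge pattern might look something like this minimal sketch. Everything here, the resource names and the helpers, is made up for illustration; it's not Puppet's or Chef's API:

```python
# Declarative on top: the desired state is just data.
desired_state = {"nginx": "installed", "/etc/motd": "welcome\n"}

def current_state():
    # A real tool would inspect packages, files, services, and so on.
    return {"nginx": "absent", "/etc/motd": "welcome\n"}

def converge(desired, current):
    # Imperative underneath: compare each resource and act on the drift.
    for resource, want in desired.items():
        have = current.get(resource)
        if have != want:
            # In a real framework this would install a package,
            # rewrite a file, restart a service...
            print(f"fixing {resource}: {have!r} -> {want!r}")
        else:
            print(f"{resource} already converged")

converge(desired_state, current_state())
```

The declarative part is the dictionary; the loop is the imperative work the talk is describing.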
The Isolators
Then, in parallel to this, there's this other story, which I'm conflating at least for the sake of this talk: containerization and virtualization, containers and VMs, as two things doing different types of isolation. What it means is a proliferation of configurations that need to be managed. They have different qualities with respect to the strength of the isolation, the speed with which they can be started, how you can move them around, and other things. The features of containers will get more important a little later in the story. This is missing lots of other parts of the story, but it's representative. IBM proved virtual machines back in 1972, but the hardware didn't get fast enough to take real advantage of that until about 2000. Then you see this explosion of all these technologies.
The IaaS (Infrastructure as a Service)
Then, as a consequence of that, and I think this is the notion of things building on each other, you see the real explosion of virtualization beginning around 2000. By 2006, Amazon launches Amazon Web Services. I think it's also interesting that by 2004, 2005, Amazon, Google, any of what we call the cloud natives today, could do that API-driven provisioning inside their own data centers. Then, in 2010, you could basically give an hour-long talk on HashiCorp alone. If the configuration management lineage of infrastructure as code runs from CFEngine through Puppet to Chef to Ansible, then Vagrant is its own thing. The metaphor I would use is that Mitchell Hashimoto is like Hamilton: he's writing like he's running out of time, coming up with all these crazy tools and abstractions. It's really fun to watch. I used Vagrant so much to test Puppet; being able to provision those machines on that cycle was game changing.
The Provisioners
You have this notion of provisioners. These are the tools that talk to the cloud providers and are able to build infrastructure. Maybe there are some other ones out there, but Terraform is the one almost everyone seems to use, and Pulumi is a new entrant that seems to have some momentum as well. This space didn't have the Cambrian explosion that some of the other tool categories did. Also, around this time, 2015, you see a lot of the shift toward Go: people starting to write infrastructure as code tooling in the Go language.
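To give a flavor of provisioning as code, here's a sketch in the style of Pulumi's Python SDK, declaring a single cloud instance. The AMI id is a placeholder, and a program like this only does anything when run by the Pulumi engine (`pulumi up`):

```python
import pulumi
import pulumi_aws as aws

# Declare an EC2 instance as a resource; the engine diffs this
# declaration against real cloud state, like the control loops above.
web = aws.ec2.Instance(
    "web",
    ami="ami-PLACEHOLDER",   # placeholder, not a real image id
    instance_type="t2.micro",
)

# Expose the resulting IP as a stack output.
pulumi.export("public_ip", web.public_ip)
```

The same desired-state idea shows up again: you declare resources, and the provisioner works out the imperative create/update/delete steps.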
IaC Moves Up the Stack
This is a quick introduction to Wardley mapping. There's this notion, what I consider building on abstractions, of things moving from genesis, through custom built and product, and eventually becoming a utility. You could argue that this is the story of cloud computing: computing is moving through this S-curve and becoming a utility. Infrastructure as code is moving up the stack. Now you see all sorts of high-level services in the cloud providers.
The PaaS (Platform as a Service)
There's this other stream related to the stuff we're talking about: platform as a service. You see this timeline of features where you're able to just push artifacts into the framework. I showed infrastructure as a service a moment ago, which was provisioning the VMs, but both Google's and Microsoft's first forays into cloud computing were really platform as a service. Then there's the Heroku propaganda: the Twelve-Factor app, to some degree, if you're really paying attention, implies the existence of a Twelve-Factor platform that's going to keep all those promises. If you think about what those 12 factors are, it's like: if you're going to write to a data service, there had better be a data service.
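As a tiny illustration of the promise involved, here's a sketch of factors III (config in the environment) and IV (backing services as attached resources). DATABASE_URL is the conventional Heroku-style variable name; the fallback value here is made up for local runs:

```python
import os

# Factor III, "Config": the environment, not the code, says where
# the backing services live, so the same artifact runs anywhere.
database_url = os.environ.get("DATABASE_URL", "postgres://localhost/dev")

# Factor IV, "Backing services": the data service is an attached
# resource; pointing at a different one means changing the URL,
# not the code. The platform has to actually provide that service.
print(f"would connect to {database_url}")
```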
Docker Mania, 2013
Then there's Docker. This was game changing, because there were always containers, and people were using them: OpenVZ, LXC, and the rest were things we used. We built systems with Puppet and Chef that used them as well. One of the things I feel was really insightful was marrying the artifact, the deployable, named, shareable image, with defaults that made it accessible to any developer. Before Docker came in and made it accessible to the average developer, starting a container was difficult and required a lot of expertise. Docker basically fixed a usability bug in LXC, and all of a sudden, any developer who could install this package could start containers, and not just start and use them, but make them and share them. That ability to socialize and share containers was also game changing, and I think one of the major contributors to how Docker really caught fire. It's again building on these understandings, trending from an innovation toward becoming more of a utility.
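To give a flavor of that build-name-share loop, here's a sketch using the Docker SDK for Python. The repository name is hypothetical, and it assumes a local Dockerfile, a running Docker daemon, and registry credentials:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Build a named image from a local Dockerfile: the deployable,
# shareable artifact the talk is describing.
image, _logs = client.images.build(path=".", tag="myuser/myapp:1.0")

# Run it the way any developer could...
output = client.containers.run("myuser/myapp:1.0", remove=True)
print(output)

# ...and share it by pushing to a registry ("myuser/myapp" is a
# hypothetical repository; this assumes you're already logged in).
client.images.push("myuser/myapp", tag="1.0")
```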
The Borg Diaspora
Then I also think it's worth shouting out this notion of Borg-inspired container schedulers. Cloud Foundry, Mesosphere, Kubernetes, and to some degree Nomad, are all inspired by the Borg. This is a very similar control loop to what we saw before with the previous generation of tools. Instead of being at the node level, now you're at a service level, where the nodes are actually resources inside of that control loop. You say how many instances of a service there should be, and the scheduler makes it so.
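It's the same converge sketch from before, just one level up. Here's a hypothetical version reconciling replica counts instead of packages and files; none of this is Kubernetes' actual API:

```python
# Desired state, now at the service level: replica counts.
desired_replicas = {"web": 3, "worker": 2}

# Current state: the replicas actually running on the cluster.
running = {"web": ["web-a"], "worker": ["worker-a", "worker-b", "worker-c"]}

def reconcile(desired, current):
    for service, want in desired.items():
        have = current.get(service, [])
        if len(have) < want:
            # A real scheduler would place new replicas on nodes here.
            print(f"{service}: starting {want - len(have)} replica(s)")
        elif len(have) > want:
            print(f"{service}: stopping {len(have) - want} replica(s)")
        else:
            print(f"{service}: converged")

reconcile(desired_replicas, running)
```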
Building
Now it's Kubernetes. That's where we live now. There are all these other things that I think you have to add to the conversation around infrastructure as code, DevOps, continuous delivery, whatever those buzzwords are and however they fit together in your head. There are these other places where you write code that's infrastructure-y, and it delivers artifacts that then get deployed. I think you have to shout out to CruiseControl as the beginning; then everyone started using Jenkins, and everyone wishes they had something better. Still, Jenkins works pretty well, so what are you going to do?
Data
Then there are also these data threads. If you think about what Hadoop brought to the table, particularly YARN and some of the other pieces, there's really an infrastructure-y process management, almost platform as a service, aspect to it that changed the game. I think each of the NoSQL evolutions, getting into Kafka and some of the newer cloud native databases, has an aspect of infrastructure as code that allows you to manage their lifecycle in a meaningful way and keep the promises you want to about reliability. Then, if we're going to talk about data, where are we going to stop? You get to the Cloud Native Computing Foundation, and there's just so much. Each one of these is building on the abstraction that came before it. Service meshes, in some sense, are very similar to configuration management for routing. I also think it's interesting to point out that these are patterns: every pattern in the Cloud Native Computing Foundation you saw first in the cloud natives; they all built the same patterns on their own, and now you're seeing them come into this pool, this shared common resource. Sometimes with these innovations you get a new genesis, and it changes everything above it from there on out.
Millions of Lines of BCL
This is only the world we can see; there's all this stuff we can't see. There are people who work at Google, or at Amazon, behind this veil, like a secret society, who see things the rest of us don't necessarily see. They're solving problems we don't necessarily have. As a consequence, there's a whole world of infrastructure as code that exists in those organizations that we don't have access to. I remember a time, 2008, 2009, when if you tried to talk to someone who worked at Google about Borg, they stopped talking to you. That's changed now that the Borg paper is out and Kubernetes is the thing they've open sourced, an effort to bring in that mindshare and thought leadership. But there was a period where they thought Borg was their secret weapon, and they wouldn't talk to you about it.
Serverless Doesn't Serverless Itself
There are similar things with Amazon. They have their frameworks, they have their platforms, they have all these services, and they're not necessarily sharing; we don't necessarily know all the tools they have internally. I know a little bit, but not enough that I would be able to recreate some of these things.
What about Runtimes and Frameworks?
Then you also have to think about the runtimes and the frameworks above that. This is me tweeting, "Any sufficiently complicated microservice deployment contains an ad hoc, informally-specified, bug-ridden implementation of half of Erlang." There are more language frameworks we could go into, but each of these has some infrastructure-y piece to it: Rails has migrations, and the Erlang runtime does a bunch of things to keep promises about reliability. Then we have what I would consider attempts to bring lifecycle management to the things we're managing and installing in these architectures. A lot of this I consider bolted onto things that weren't designed to be cloud native, to try to make them cloud native so they can have that lifecycle. Whatever the sysadmin would do, we try to encode in these operators, or in Habitat, or whatever. I feel like, hopefully, we can get past that at some point. I wish we had abstractions where we could just specify the promises we wanted to keep around scalability and latency, declare them, and then the underlying infrastructure would be hidden from us but would keep all those promises. That'd be awesome.
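Purely as a thought experiment, no such framework exists here, declaring those promises might look something like this:

```python
from dataclasses import dataclass

# Hypothetical: you declare the promises, not the infrastructure.
@dataclass
class Promises:
    p99_latency_ms: int        # latency promise
    min_availability: float    # reliability promise
    max_cost_per_hour: float   # cost promise

# "Keep these promises; I don't care how." The platform underneath
# would be responsible for the scaling, placement, and failover.
deploy = Promises(p99_latency_ms=100,
                  min_availability=0.999,
                  max_cost_per_hour=2.50)
```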
Conclusion
Code that works but doesn't reveal its underlying mechanisms and intent is one broken dependency away from being useless. Communicate with other humans; you neglect that at your peril. You can be fluent in any of this stuff in six months if you need to be, so don't get too stressed that you don't know it all. It's all there for you. This is just stuff I found in my searching that I thought was interesting: Dapr, SWIM, RSocket, and WebAssembly are interesting pieces for building event-driven architectures, slightly different paradigms from each other, but very interesting. Then there's this infrastructure as code movement around managing models and a lot of the machine learning stuff. Then Dark, to me, is like cloud native Smalltalk: an integrated environment, basically the IDE and the runtime together for a specific platform. Who knows how these will evolve? I thought it was interesting. What comes next? I don't know either. I'm betting long on computers and humans.