I did. It was Paris. I mean how can you not enjoy Paris?
Rags: Absolutely. No, I meant after the summit.
Yes, that's right. I took vacation.
Rags: Oh, you were in Paris. Okay. Got it. Got it.
If you're in Paris, you might as well stay in Paris, right? I mean why would you go somewhere else?
Rags's full question: That makes sense. I didn't know that. Yes, anyway. So what I'm here to talk about, in general, is: should application developers really care about OpenStack? That's my first question. With the spotlight really shining on the application developer, and a lot of people saying software is eating the world, the developer is really front and center of the API and app economy, right? But should they really care about Infrastructure as a Service in general and OpenStack in particular?
So it's a nuanced answer, which is that I think they need to care, but only indirectly. What an application developer needs to stay focused on is delivering whatever solution they're bringing to market, and doing it in the easiest way possible. And I think that's what drives the requirement for Infrastructure as a Service.
For OpenStack, it's that developers want sort of a contract with a set of APIs where they can spin up the resources they need in order to assemble the underlying underpinnings of their application. And even if they have a big abstraction between them and the Infrastructure as a Service, like a Platform as a Service, for example Cloud Foundry, they still have to care to a certain degree about the infrastructure. Other people will hand wave and pretend that you can kind of ignore the infrastructure, but the reality is that when the application suddenly has networking problems, you need to know whether it's the networking in the application or whether the underlying infrastructure is actually impacting your network.
And so they need to care, but I think ultimately what needs to happen is that we still need to manage that contract between the infrastructure operators and the developers in such a way that the developers can be focused on that contract. That's what happens with Amazon today. If the networking isn't working well for you, you kill that network or that VPC or that VM and you start another one so that you can go on, because you know what the basic parameters are of what you ordered via the API.
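The kill-and-replace pattern described here can be sketched in a few lines of Python. This is purely illustrative: `FakeCloud`, `launch_instance`, and `ensure_healthy` are invented names standing in for a real cloud SDK such as the EC2 or Nova client.

```python
import itertools

# Illustrative stand-in for a cloud API client; a real app would call
# something like the EC2 or Nova API instead. All names here are invented.
class FakeCloud:
    def __init__(self):
        self._ids = itertools.count(1)
        self.terminated = []

    def launch_instance(self, flavor):
        # The "contract": the API always returns an instance of a known shape.
        return {"id": next(self._ids), "flavor": flavor, "healthy": True}

    def terminate(self, instance):
        self.terminated.append(instance["id"])

def ensure_healthy(cloud, instance):
    """Kill-and-replace: if an instance misbehaves, don't debug the
    infrastructure -- terminate it and order a fresh one with the same
    parameters you originally asked for via the API."""
    if instance["healthy"]:
        return instance
    cloud.terminate(instance)
    return cloud.launch_instance(instance["flavor"])
```

The point is that the application never inspects the plumbing; it only relies on the contract that a replacement with the same parameters can always be ordered.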
Yes. I mean I think the key is, when we talk about second platform versus third platform apps, the defining characteristic of a third platform application, or a cloud native application, is that it manages its own resiliency, its own redundancy, its own data replication. And the way it can do that is by having these guarantees around the APIs, some basic SLAs about what the infrastructure is providing. So it knows that a VM is misbehaving or it's not getting the IO it needs from a storage device or whatever, and the application can proactively make a change.
So it's a little tricky, right? I mean I think a couple of things are happening with OpenStack. First is that it's really capturing all the mindshare. People see this really evolving into kind of the next generation of data center operating systems, the fundamental new fabric we'll use, kind of a step up from Linux. If Linux is the current per-server fabric, OpenStack is the fabric that allows us to get across the entire data center and have a single orchestration and management layer.
And so that's great, and that can help the developer in many ways, but the developer probably is going to care less about the infrastructure as long as it's out of the way. It's like turning on the water faucet. You just want the water to come out. You don't necessarily care what's going on with the plumbing behind the wall. So if OpenStack is successful, that's probably where it delivers that experience, just like Amazon does today. And then what the application developer needs to be focused on is making sure that they are delivering their requirements to the infrastructure teams, so those teams can continue to innovate and update that layer over time, while the developers stay focused on the platform layer and above, where they're delivering their applications.
And the thing is, I think it's really a bit of a challenge for folks to recognize that while developers always want to be focused on just delivering the application and updating it, moving towards the DevOps model, continuous deployment, continuous delivery, and so on, there will always be times when that touches the infrastructure. So they have to maintain enough awareness of it that they know what's going on, how to respond, how to get help, and how to design their application right around those realities.
Rags's full question: On the opening day keynote at the OpenStack Summit, there was a quote from Einstein, something along the lines of "if you always do what you did, you'll always get what you got." I think it was referring to the DevOps model versus the Waterfall model really, right? If you continue to do things the same way, you're going to get the results you've always seen -- a lot of software projects get cancelled and so on, right? Are IT shops really prepared for the change that will be ushered in by cloud computing or collaborative computing? I also heard that you have some interesting roots in that you were a pioneer in automation on the production shop floor. So do you have any subtle or not so subtle advice to IT managers as it concerns DevOps in general?
Yes, I carried the pager for a long time. You know, I managed systems, I managed storage, I managed networking, and so I was in pretty good stead when cloud came along and kind of converged those things. You know, it's a challenge, because I think it's tempting to sort of pretend that things are going to stay the way they have been, but that's really not the case. There's pretty significant transformation happening in the enterprise.
I guess the main thing that I would want to say to your average IT administrator is that the world has moved to a model where it's just really critical for the business units and application developers to deliver in days and weeks, not months and years. Historically, what's happened is that, as a group, everybody within the enterprise gets jammed up on a given project because they're trying to remove all the risks. So they're like, okay, we're going to deliver a new application; let's make sure it's on mission critical storage, mission critical networking, and there's all of this work. And in aggregate what winds up happening is you build these big, overrun silos that are complex, hard to maintain, and very expensive. If you compare that against going directly to Amazon today, the experience is very different. The developer and the business unit can just get up and go, use a credit card. And if it fails within the first few weeks of trying, they shut it all down.
And so that experience of reducing the cost of failure, making everything much cheaper, making it fully automated, the DevOps model, really helping the business units and application developers get that, is now becoming very important for the IT administrators. You've got to get out of the mindset of managing all the risk out and get into a mindset of helping create value, reducing the time to market, and increasing the speed of the internal teams. Get your mindset around enabling them to move faster rather than around reducing the risk of delivery. And I think that gets you where you need to be.
6. Is the term DevOps overhyped?
Probably; a lot of people claim a lot of things for DevOps. But when I sit down and think about it, I think fundamentally what's there is a recognition that there's been a breakdown in the enterprise between the objectives of the two teams that are required to deliver the application. The objective of the line of business is to basically get something to market as quickly as possible. The objective of the centralized IT teams is to manage down risk. This is what I was just talking about.
The reality is that both of those are very important things, right? But you can't have them in conflict with each other. If the line of business comes to the centralized IT team and says, "Hey, we've got this list of requirements," and then these guys come back and say, "Well, that'll take 18 months and $10 million," that's a no go for the line of business. They're just going to go to Amazon. And so even though that's the way we've done things in the past, we have to recognize that we need to let the line of business go much, much faster. It's got to have the same consumer experience behind the firewall that it does outside the firewall. And at the same time, the line of business really needs to understand that the IT team needs some amount of governance and compliance and some structure around the way that's going to occur.
I have seen a lot of enterprises figuring out how to break down those two silos and allow those teams to work together. And for me, DevOps is really about figuring that out. We're in the very beginning days, and so you can talk about configuration management, automation, and all that, but those are all just means to an end. The end is to break down those two silos.
Well, Platform as a Service gets overused. When I think of it, I think of it as a load-your-code-and-go system, almost a kind of Java Container 2.0. Something like Cloud Foundry is really taking the ideas that people have around WebSphere and WebLogic and asking what those look like if the environment is no longer static and there's an API so you can drive the infrastructure. And then you get something like Cloud Foundry. That's immediately what comes to mind.
Platform as a Service absolutely has a place, but I think sometimes people want to use it everywhere. What you're going to find is that there's sort of a spectrum. At one end, you need a very high level of customization and you've got to do something fairly unique, and there Platform as a Service is probably too constraining. You're going to use a lot of DevOps techniques. You're going to use Chef, Puppet, Ansible, SaltStack. You're going to use all that configuration management, and you may build your own automation frameworks.
You basically want to build your own custom Platform as a Service to manage the platform. The next step up is more of an enterprise Platform as a Service system like Cloud Foundry, which is more constrained but hides a lot more of the complexity. DevOps has less of a role to play there, other than you need the teams to coordinate on making sure that the Platform as a Service capabilities meet the requirements. And then the very last step up from that is something like Software as a Service, a pre-canned application, a SharePoint admin, things like that, where DevOps has very little or no role to play.
I'm not an expert on the Cloud Foundry architecture. My sweet spot is Infrastructure as a Service; I'm an infrastructure guy. I think the timing probably is right, generally speaking. In terms of things being missing throughout the entire stack, maybe that's something I can speak to more effectively. One thing I noticed when I was talking to, for example, the financial services guys is that they're used to environments that are very static, so they can say this is the security policy in this environment, and they know that's the case because it doesn't move around a lot.
As we move towards private cloud, whether it's Infrastructure as a Service or Platform as a Service running on top of it, the environments are much more dynamic and are changing and moving all the time. And a lot of what I've seen from a security perspective is the desire to have a security policy that basically follows the app and its data around no matter how it expands and contracts. That is currently missing. It's a hard problem to solve, honestly. But people are working on it.
So that is one of the places where I think there is a gap, because what I found when I talked to the financial services folks in particular is that they want to move to the cloud. They want the speed of their developers to change dramatically. They want DevOps. But they've got compliance and regulatory auditors who come in and expect to see a certain kind of thing. And so they need to bring that forward into the cloud so that those auditors can come in and put their stamp on it even though the environment has changed.
Yes, they do, with some caveats. The reality of the situation is that the key characteristic of a third platform app is that it's typically going to be dealing with scale that you haven't seen before. In the '90s, a large app was SAP servicing 100,000 people. Today a large app is Facebook servicing 1.5 billion. Those are just very, very different scales. So if you're building a mobile application that's going to plug into iOS or Android, or you're building a web application that's going to plug into Facebook, the chances that you could have a sudden takeoff are very high, and the only way to have some kind of substrate that can handle that is to design with scale-out patterns.
Most second platform applications are designed for scale-up and not for scale-out, and that's a challenge. In the past, when you tried to do scale-out, a lot of the tools weren't there. These days there are actually a tremendous number of tools to help you design for scale-out, although most people are not experts in them. So that scale-out pattern can be something that's challenging to learn and to bring to your app. That being said, one of the things I find very interesting is that pretty much any app from 1999 on, which I like to think of as at least any web application, has a shared-nothing architecture. It's already designed so that many of the layers scale out.
So think about it: you've got a layer of load balancers and a layer of web servers being load balanced. It's when you get to the storage area, the database layer, that you start to see less of a scale-out pattern; you've got an active-passive pair or something like that. So a lot of those shared-nothing, platform 2.5 kind of apps can be easily updated to platform 3 by looking at the storage layer and just updating it to more of a scale-out pattern.
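One common way to move a storage layer from that active-passive pair to a scale-out pattern is to hash keys across independent shards, the same way the load balancer spreads requests across web servers. A minimal sketch, assuming a toy in-memory store (`ShardedStore` and `shard_for` are invented names, not a real database API):

```python
import hashlib

# Toy sketch of a scale-out storage layer: instead of one active-passive
# database, keys are hashed across N independent shards.
def shard_for(key, shards):
    # Hash the key deterministically so the same key always lands on
    # the same shard.
    digest = hashlib.md5(key.encode()).hexdigest()
    return shards[int(digest, 16) % len(shards)]

class ShardedStore:
    def __init__(self, n_shards=4):
        # Each dict stands in for an independent database node.
        self.shards = [dict() for _ in range(n_shards)]

    def put(self, key, value):
        shard_for(key, self.shards)[key] = value

    def get(self, key):
        return shard_for(key, self.shards).get(key)
```

Real systems layer consistent hashing, replication, and rebalancing on top of this idea, but the core move is the same: no single node holds all the data, so the layer can grow horizontally.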
You could use it for legacy applications, but it's a little bit of a force fit. The reality is that if we look at the existing virtualization systems, largely VMware, they service those second platform apps really, really well. In fact, all the updates VMware is making around the software-defined data center, adding more automation, greater scale, and more manageability across large virtualization systems, that's all gravy. And I think it's probably a mistake to take those second platform apps and put them on third platform infrastructure. It's more likely that all the net new apps go onto the net new infrastructure model.
Rags: I know that you have been involved with OpenStack right from the inception. You're a part of the foundation. You are on the board. You recently moved to EMC as a result of a buyout. How will OpenStack in general and the acquisition in particular help EMC? It's a very open-ended question, I guess.
Yes. So Cloudscaling, my previous company that EMC acquired, was part of the original OpenStack launch in the summer of 2010. And if you look at all the major players in OpenStack today, we're one of the only companies that was actually there at the launch. We had a number of early wins, getting OpenStack into production as early as January 2011, only six months after it had launched. And we've been on this journey, and now we're on this journey with EMC.
And the thing that I see about EMC that is really, really positive is that EMC gets and understands that there's a set of new kinds of applications, that those applications are growing very quickly and are very important, that the developers are important, that the DevOps model is important. All those things EMC really gets, and it wants to be there to help the next generation of IT. That's amazing to me, because a lot of the classic enterprise vendors would sort of put their heads in the sand and pretend that IT is always going to be the same forever. Except that's never been true historically, so it certainly won't be true in the future, right?
I'm one of those guys who was using the internet very early on, in the mid-'80s, and people used to ask me, "What's your hobby?" And I'd say, "Well, I use the internet." And they'd say, "The what?" Now, I just can't imagine anybody not knowing what the internet is. That's the level of paradigm shift we're looking at here. And I really see and believe that EMC, the entire leadership team, gets that the future looks very different than the present and really wants to get there. And I think you can see in the acquisition of Cloudscaling that they want to place some bets and take some chances.
That's a tough question. Let me try to take it piece by piece. So interoperability appears to be important to people because they want application portability. That seems to be the main driver. You develop an application in a certain location, and when you do, you make assumptions about the environment it's working in. A lot of the time those are assumptions you're not even aware of. The environment just works a certain way, so you design your application, and there it is, and it's tested. And then you go and move it to a different environment, and things break. The classic thing you always hear from engineers is "it worked on my laptop."
And so the same is true with clouds, and that's why people are looking for interoperability. But interoperability is very, very hard. I remember in the late '90s there was something called IPsec VPNs that people were deploying widely to run VPNs across the internet. And what happened was that even though IPsec had a very clear RFC standard, all the vendors who created IPsec VPN appliances interpreted it a little bit differently, and none of the IPsec VPNs worked together. It took years of work and bakeoffs to basically get the interoperability to work. At the end of the day, it's about testability.
And so right now OpenStack has this amazing end-to-end transactional testing capability called Tempest, and that's what they use in their continuous integration system. I think there are a thousand plus tests. Many of these tests exercise very specific functionality, even Amazon Web Services functionality, making sure that the EC2 APIs and the functionality in OpenStack that mimics Amazon look the same.
So I think that if we're going to get to interoperable OpenStack clouds, both between different versions of OpenStack and between OpenStack and other public clouds, we're going to need to create more richness around tests that exercise the behavior of the various clouds, and then use that to basically drive de facto standards: this is an AWS hybrid compatible cloud, this is an OpenStack compatible cloud, this is an Azure compatible cloud. None of those is necessarily mutually exclusive; you might be able to have compatibility with several different flavors. But that's the way we'll get to interop: extending these testing frameworks and then having standards where you can press a button and say that's certified AWS interoperable.
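The idea of behavior-driven compatibility testing can be sketched very simply: a cloud counts as compatible with a flavor only if every behavior probe for that flavor passes. This is a hypothetical illustration in the spirit of Tempest/RefStack, not their actual API; `check_flavor`, `AWS_LIKE_CHECKS`, and `ToyCloud` are all invented names.

```python
# A flavor is just a named set of behavior probes; a probe takes a cloud
# client and returns True if the observed behavior matches expectations.
def check_flavor(cloud, checks):
    """Return the names of the behavior checks the cloud fails."""
    failures = []
    for name, probe in checks.items():
        try:
            ok = probe(cloud)
        except Exception:
            # A missing or crashing API counts as a failed behavior.
            ok = False
        if not ok:
            failures.append(name)
    return failures

# Invented probes for an "AWS-like" flavor.
AWS_LIKE_CHECKS = {
    "run_instance_returns_id": lambda c: "id" in c.run_instance(),
    "terminate_is_idempotent": lambda c: c.terminate("i-1") == c.terminate("i-1"),
}

# A toy cloud that happens to satisfy both probes.
class ToyCloud:
    def run_instance(self):
        return {"id": "i-1", "state": "pending"}

    def terminate(self, instance_id):
        return "terminated"
```

An empty failure list is the "press a button, get certified" outcome; a nonempty one tells you exactly which behaviors diverge from the flavor you are targeting.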
Yes, I call them flavors, but I try not to get too much into the weeds on how we do that. If you want to really figure it out, you would look at an OpenStack Foundation board initiative called DefCore and then at RefStack, which is a set of tools around DefCore designed for interoperability testing. So we're down the road of building up a standardized framework for how we would actually figure out whether something is interoperable, and then it becomes more a matter of turning the crank to create more tests and define what Amazon Web Services compatibility means, what OpenStack compatibility means, what Azure compatibility means, and so on.
13. [...] Is this hype that will pass? Is it somewhat relevant to application developers?
Rags's full question: Two things that I heard over and over again at the summit were NFV (I didn't even know what NFV stood for: Network Function Virtualization) and containerization. Docker was mentioned all over the place, and you could not go wrong if you mentioned Docker in any conversation, I guess. Is this hype that will pass? Is it somewhat relevant to application developers?
So there are two pieces; I'll take them separately. NFV is sort of the fancy term that a lot of the carriers and network providers want to use for the virtualization of the layer 4 through layer 7 network services. A lot of times when people think about SDN or network as a service, they're thinking about the whole stack. But if you look at most SDN plays today, it's really layer 2, layer 3, maybe layer 4. Once you start thinking about firewalls, load balancing, any of those kinds of things, that's not really SDN so much as what people are calling NFV.
So NFV is about how we take the network services that we've built up over the last 20 or 30 years and redesign them in a scale-out manner that is software only, so they can run on any x86 substrate, whether it's VMs, containers, bare metal, and so on. That's why there is this desire to make it happen. And just like SDN, it's a little bit slow to emerge. You can expect it to be slow to emerge because it's a big nut to crack. I mean, you've got 30 years of network appliances that are not going to turn into software overnight.
The second piece is containers. Containers are here to stay. They've been here before; in a way, they predate virtualization. The thing about containers that's important to recognize is that there are a couple of different drivers. One is that if you look at a lot of third platform applications, they can use an entire x86 box of resources without any problem. There's not really much need to virtualize them on a box, because they can use all 16 cores and 128 gigs of RAM, so why would you virtualize? An example might be Hadoop. So that's one driver.
Another driver is that if you look at things like Chef and Puppet, the configuration management tools, and Salt and Ansible, they are very, very powerful. They're also very complex. And if you're more of a developer than an operator, it can be challenging to use those in your practice. So part of what something like Docker brings to the container story is this unified, layered file system that has personalities on it. What that does is allow you to have a very simple way of doing configuration management and control, and to redeploy your application in a very lightweight, very fast manner. And so you can get a lot of the benefits without using complex configuration management systems, but as a developer, done in a way that makes more sense to you.
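The layered file system idea can be modeled in a few lines: each layer records only what it changes, and the container sees the union, with newer layers shadowing older ones. This is a toy illustration of the concept, not Docker's actual implementation; the paths and layer contents are made up.

```python
from collections import ChainMap

# Each dict is one image layer: only the files that layer adds or changes.
base = {"/etc/os-release": "ubuntu", "/app/main.py": None}
deps = {"/usr/lib/libfoo.so": "1.2"}      # "install dependencies" layer
code = {"/app/main.py": "print('v2')"}    # "copy in source code" layer

def container_view(*layers):
    # The container's file system is the union of all layers; ChainMap
    # searches left to right, so we reverse to let later layers win.
    return ChainMap(*reversed(layers))
```

Redeploying the application is then just swapping the thin top layer while the base and dependency layers are reused unchanged, which is why container rebuilds and redeploys feel so lightweight compared to re-running a full configuration management pass.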
There are good purposes for both of those, and they're not mutually exclusive in any way, but that's what's driving containers in general. So in the future, what we're going to see is people really caring about getting dynamic compute, and there will be type-1 hypervisors, KVM, ESX, and so on. There are going to be containers: Docker, LXC, CoreOS, and others. And there's going to be bare metal on demand. You're just going to mix and match and use the things that make sense for your workload.
You know, the big thing is not to get too caught up in the terms, chasing the new shiny thing. I have a very biased view because I came from the startup world. I worked at twelve-plus startups, and startups have this saying: get shit done. And what I think is really important for the application developer to keep in mind is that that's the mantra you should have. It's about delivering the value, delivering the thing that you need on time, and then continuing to iterate on it.
Nothing is ever ultimately done, but that continuous improvement, that continuous cycle of making changes and doing it very rapidly, that's the virtuous thing. In that process, you're going to find that certain tools work better for you, and you can incorporate those into what you're doing, and the same with processes and workflows and so on. But don't get too caught up in the alphabet soup or in the particular ways that one person or another says you should do things. Just try to find a way to deliver that kind of experience and process for yourself and your team.
Rags: Got it. So thank you again, Randy. And with that, we are off.