Thank you for having me, Carlos. OpSource has been in cloud computing for about six years now. For most of those six years we've been focused on providing a set of services for software-as-a-service companies, people delivering enterprise applications in the cloud. We do all the back-end services for software, from a big shared infrastructure up through higher-end services such as performance management, compliance, even things like billing and integration.
OpSource recently announced the OpSource Cloud, which is our first direct enterprise product. It's a way for the enterprise to take advantage of what we think is the best part of the cloud - the flexibility, the APIs, the immediacy, the community that builds around it - but implemented in a far more enterprise fashion, with better levels of security, performance and enterprise controls, so it can be used more effectively in large-scale environments.
For enterprise clients, or really for any client, there are two big issues around security that are so basic in cloud computing that they just have to be addressed. The first we knew a lot about when we first got into this: all the cloud environments today - anything you can go and sign up for and get up and running quickly with an API - are built on one big flat network. When you deploy resources in the cloud, you deploy everything onto this same flat network. If you put out a web server, it's going to be connected to the Internet through a front-end interface; if you put out a database server, it gets the same connection to the Internet on the same interface.
Directory structures, application servers, any type of system will all sit on one giant flat network. Obviously you are creating an environment where every single system is directly connected to the Internet, whether it needs direct access to the Internet or not, whether you are going to use it for internal purposes or whether it should be sitting in a more secure zone. If you need compliance - if you need to keep your data or your directory structures, for example, off the public Internet or protected by an additional layer of security - you can't do that in the cloud today.
It creates a number of performance issues too, because you are not only sharing that network across all of your own systems, you are sharing it with everybody else who sits on that network. Any customer who takes up too many network resources can actually impact every other customer on the network. So one of the first big needs in cloud security is the ability to build private clouds - private networks within the cloud.
The fact is, today, when we build an architecture, we usually think about what's going to be touching the Internet and what's not, and then we connect the internal things to each other without having them go out and touch the public Internet. With the OpSource Cloud, the first thing you create, before you create anything else, is your own private network within the cloud - a true Layer 2 VLAN that is not connected to the public Internet.
You are leveraging your cloud resources, but you are not actually connecting to the public network, and you do that before you put out any servers or any storage. You start dropping systems into that network, and all of those systems are private; none of them are connected to the public Internet. Once you decide that certain systems - maybe a web server or an e-mail server - should be directly connected to the public Internet, you expose only those resources that need to be, while things like databases and directory structures, where you need to keep information private, are never connected to the Internet.
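To make that workflow concrete, here is a minimal sketch in Python of the "private network first, expose selectively" pattern described above. The class and method names are illustrative assumptions, not the actual OpSource Cloud API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Server:
    name: str
    role: str                       # e.g. "web", "db", "directory"
    publicly_exposed: bool = False  # nothing touches the Internet by default

@dataclass
class PrivateNetwork:
    vlan_id: int
    servers: List[Server] = field(default_factory=list)

    def deploy(self, name: str, role: str) -> Server:
        """Every new system lands on the private VLAN, unreachable from outside."""
        server = Server(name, role)
        self.servers.append(server)
        return server

    def expose(self, server: Server) -> None:
        """Explicitly publish only the systems that must face the Internet."""
        server.publicly_exposed = True

# 1. Create the private network before any servers exist.
net = PrivateNetwork(vlan_id=101)

# 2. Drop systems into it; all of them start private.
web = net.deploy("web01", "web")
db = net.deploy("db01", "db")

# 3. Expose only the web tier; the database never touches the public Internet.
net.expose(web)

for s in net.servers:
    print(f"{s.name}: {'public' if s.publicly_exposed else 'private only'}")

The point of the model is simply that exposure is an explicit, per-server decision made after deployment, rather than the default.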
If you don't want to connect any of it to the Internet - you just want to run internal enterprise applications on internal corporate data - there is no reason to have a web server or an e-mail server exposed at all. You can take that private network and cross-connect it, either through a site-to-site VPN or through a dedicated circuit, back into your corporate enterprise and have none of it ever touch the public Internet. That's essentially how these things work inside enterprises today, and it obviously allows for a much more secure environment than you get in a big shared network that's all touching the public Internet.
That's issue number one, and that one we were really familiar with. Issue number two with cloud environments is probably even more fundamental: the vast majority of cloud environments today have no differentiation of users. They all use a single username and password. When you sign up with Amazon or Rackspace, or even VMware with their vCloud Express product now, you actually get one username and password for your entire organization, and everybody in that organization has to use that same username and password to access those resources.
If you want to work in a shared environment, with multiple system administrators or developers in one environment, they all share the same username and password. That means every one of them has root access to every system on that network. And it's even worse than giving everybody the same root password across your architecture, because anybody who comes in can actually destroy the environment - three clicks and you've wiped out a petabyte of data, 50 systems.
There is no control over who can do what within those environments. And it's funny - the real problem hasn't even been people being malicious about it, going in to install malware and destroy entire systems. The real problem has been that it's also an open purchase order. Once you have a cloud environment with a shared username and password, every single user who comes into that environment can spin up as many systems as that environment allows - the general rule of thumb is about 25 a day now.
You are allowing every one of the people in your organization to spin up 25 new servers every day - they can spin up 750 servers in a month. This has been a real issue; I think we talked about it a little earlier. People are getting big bills back, because it's one thing when it's your personal credit card and you are a developer trying to do something new, but if it's on a corporate invoice and you have people who aren't thinking, they are going to spin up resources all the time and never spin them back down.
It was like the first month my daughter had text messaging on her phone. After a week she asked me, "How many messages do I get?" I said, "200." "Dad, I had 2,000 in a week!" I knew I was going to get an 800-page bill back, so I called them: "Hey, change the billing plan on the text messaging!" We are really seeing that around cloud environments. To be particularly blunt, cloud environments today are less secure than the PC was prior to Windows 3.11. We've had separate usernames and passwords forever, and beyond separate usernames and passwords, of course, we need to start thinking about federated authentication, single sign-on and how you integrate that with your existing OSS systems.
You want policy-based controls. You don't just want every user to have a separate username and password so you know who did what and who spent what. You want to be able to say, "This user can only work on security issues and the network. This user has the ability to spend money, and this is how much they can spend. This user has root access to the systems, or is an image creator" - some basic policy-based controls - if you are really going to see this adopted for production systems in enterprise environments.
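As a rough illustration of what such policy-based controls might look like, here is a small Python sketch. The role names, action names and spend limits are invented for the example; they are not OpSource's actual permission model.

# Each role gets an allowed set of actions and a monthly spend cap (in dollars).
ROLES = {
    "security_admin": {"actions": {"edit_firewall", "manage_vpn"},     "monthly_spend_limit": 0},
    "deployer":       {"actions": {"create_server", "delete_server"},  "monthly_spend_limit": 500},
    "operator":       {"actions": {"configure_server", "reboot_server"}, "monthly_spend_limit": 0},
}

def is_allowed(role: str, action: str, projected_monthly_spend: float = 0.0) -> bool:
    """Check whether a user in this role may perform an action within budget."""
    policy = ROLES[role]
    if action not in policy["actions"]:
        return False
    # Users who can spend money are still capped at their monthly limit.
    return projected_monthly_spend <= policy["monthly_spend_limit"]

print(is_allowed("deployer", "create_server", projected_monthly_spend=120.0))  # True
print(is_allowed("operator", "create_server"))                                 # False: not permitted
print(is_allowed("deployer", "create_server", projected_monthly_spend=900.0))  # False: over budget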
We were thinking about all of these really sophisticated things, like single sign-on and federated authentication tied back into the OSS, and we realized that out of the gate we needed to solve the even more elemental problem. So right out of the gate with OpSource you have the capability to create separate users, assign departmental roles to users, and apply basic policy-based controls: some users can only handle security items, some users can only do server deployment and spend money, some users can only do server management - actually go in and configure servers.
Here's where the cloud gets really powerful, and I think where we'll see leverage beyond even OpSource. You'll see us add additional, finer-grained controls - certain departments can only do certain things. But the other great thing about the cloud, and about OpSource, is that everything you do in the cloud is API-accessible, meaning that adding users, just like adding systems, and adding management, just like adding storage, is something you can drive from any other application.
It's very easy to tie that user management into your existing OSS systems, HR systems and directory structures. You can actually set up a rule within your OSS system that says, "Assign this user, give him this set of permissions and controls," so that when I make a change in my OSS - when I hire a new employee, when I fire an employee, when someone moves to a new job - it automatically populates back into the OpSource Cloud.
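A hedged sketch of that kind of HR-to-cloud rule, with CloudUserAPI standing in for whatever user-management endpoint the provider exposes - the method names here are hypothetical, not a documented API:

class CloudUserAPI:
    """Stand-in for the cloud provider's user-management API."""
    def __init__(self):
        self.users = {}

    def create_user(self, username, role):
        self.users[username] = {"role": role, "active": True}

    def disable_user(self, username):
        if username in self.users:
            self.users[username]["active"] = False

def apply_hr_event(cloud: CloudUserAPI, event: dict) -> None:
    """Map HR lifecycle events (hire, transfer, terminate) to cloud permissions."""
    if event["type"] == "hire":
        cloud.create_user(event["username"], role=event["department_role"])
    elif event["type"] == "transfer":
        cloud.users[event["username"]]["role"] = event["department_role"]
    elif event["type"] == "terminate":
        cloud.disable_user(event["username"])

cloud = CloudUserAPI()
for evt in [
    {"type": "hire", "username": "jsmith", "department_role": "deployer"},
    {"type": "transfer", "username": "jsmith", "department_role": "operator"},
    {"type": "terminate", "username": "jsmith"},
]:
    apply_hr_event(cloud, evt)

print(cloud.users)  # {'jsmith': {'role': 'operator', 'active': False}}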
You can take what exists in your systems today and either build additional rules and controls or have it follow the rules and controls you already have in place, using the tools and systems you already manage them with, so the cloud becomes just an extension of those applications. The cloud is about bringing infrastructure to applications through APIs; now you can bring control of that infrastructure and its users through those same APIs back into the application.
Absolutely, on both of the items we discussed. First, you have the ability to have systems that aren't public. I don't care how big your enterprise or your group is, you shouldn't have every server sitting on the public Internet. You are going to need database servers, you are going to want traditional multi-tiered architectures, and anything that isn't publicly facing should not be exposed to the Internet.
You are also going to want to do things like custom firewalls and access control lists. So even for a small customer who would otherwise have to configure a firewall themselves, OpSource is great: you get this private network, you get to control it, and you can manage a custom firewall configuration. In most cloud environments, to put a firewall in place you have to deploy an open-source firewall, configure it, then figure out how to send traffic through it and back to your servers. In OpSource you don't need to worry about that.
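Purely as an illustration of what a custom access control list on such a built-in firewall might look like when expressed as data - the field names and addresses are assumptions, not a real provider schema:

# Illustrative ACL for the private network's firewall; evaluated top to bottom.
acl_rules = [
    # Allow HTTPS from anywhere to the one exposed web server.
    {"action": "allow", "protocol": "tcp", "src": "0.0.0.0/0",    "dst": "10.0.1.10/32", "port": 443},
    # Only the web tier may reach the database, and only on the database port.
    {"action": "allow", "protocol": "tcp", "src": "10.0.1.10/32", "dst": "10.0.2.20/32", "port": 5432},
    # Everything else inbound to the private range is dropped by default.
    {"action": "deny",  "protocol": "any", "src": "0.0.0.0/0",    "dst": "10.0.0.0/16",  "port": None},
]

for rule in acl_rules:
    print(rule)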
Even for small customers who don't have site-to-site VPN capabilities, or don't want to run a private circuit, the OpSource Cloud is all managed through a client-to-site VPN tied to your administrator access. Each of the users you create and assign passwords to gets a Cisco client-to-site VPN client downloaded onto their machine, uses the same username and password they use to manage the cloud, and accesses and manages their systems only over that VPN connection.
You don't have to set any of that up, as you would today; immediately everything is set up as a private VPN just for you, and you don't have to build it out on the network side. On the user side, things like who can set security rules and configurations, who can add new servers and who can manage the servers you have today are built-in, policy-based controls that you can access through the UI. You don't have to use the API or tie it back into anything; you can manage those items right through the UI, and it comes as part of being a customer of ours.
This is where I think it's been very difficult for a lot of the initial cloud providers - things like compliance - because they haven't had to deal with enterprise customers in the past, and the people who understand the cloud really well tend to come from the public cloud side. They run more consumer-type operations: they understand how to make things immediately available, they understand credit card billing, and they have a hard time understanding how to get compliant in an enterprise fashion.
Because the cloud is very new, we have to go through our six months to a year of proving out all of our processes to get fully compliant in all those areas, but on our existing infrastructure and the work we do for SaaS clients we already handle PCI, SAS 70 and European Safe Harbor, and we maintain HIPAA compliance for a number of our customers. We very much understand what you need to do for compliance. Part of it is the way you structure the infrastructure: you can't get compliant if you have all of your data sitting on the public Internet. There are actual structural issues with shared networks that prevent compliant environments. You have to address the technology issues - instead of shared networks you have VPNs for access, and things sit off the public Internet - but you also have to address the process issues associated with compliance.
Getting compliant is not just saying, "Hey, are the servers up and running?" They may be up and running, but if you have no way of taking in information from a customer, you're not compliant. To be compliant, for example, you have to have a process in place that says: if there is a problem, how does it get escalated through the organization? When it gets escalated, what are the steps you go through to resolve it in a timely fashion? Do you actually follow those steps, and do you get audited on them? And that auditing happens on a recurring basis.
Take the OpSource Cloud: we have a process we've been using on the SaaS side to stay PCI compliant, and we've applied it to the OpSource Cloud, but now there's an ongoing need to actually get audited on it. SAS 70, which is the one usually used for SOX compliance and the basic elements of compliance, is a yearly audit; PCI compliance is a yearly audit; European Safe Harbor compliance is another yearly audit.
For specific audits we have to get into each customer's environment, and those also tend to be yearly audits. So if you go out to a cloud provider, you are going to want to ask: "Are you compliant? Are you working to get compliant? Will you share with me the results of your audits in those environments?"
They should be pretty open about showing you, "Here is what our processes and procedures are," and about being checked by an outside organization to see whether they actually follow those processes and procedures - to ensure the security of your data and the validity of the process, to make sure it works the right way. Because otherwise, having the systems run all day but having no way of escalating a problem isn't going to keep you compliant in the enterprise.
It's interesting. We hear a lot of questions about SLAs in the cloud - what are the Service Level Agreements? I always remind people that SLAs are not products and they are not services. SLAs are a way of committing to a certain level of performance for your infrastructure. People don't want an SLA, they want the performance. No one wants the money back; they want it to work the way it was supposed to, otherwise they wouldn't have bothered to begin with.
When customers ask about SLAs, what they are really asking is, "How are you going to guarantee that I get the performance I need?" The cloud has actually been pretty good in certain areas there. With the way virtualization works now, n+1 redundancy - or even n+n redundancy in certain environments - allows much higher levels of availability than traditional physical iron can get to.
This is one of the things that makes the cloud so valuable: you are spreading these resources across multiple customers, and you can make failures transparent - if there is a physical problem, as a customer you never even see it. As a result, one of the areas the cloud gets criticized for, which isn't really fair, is actual instance availability: virtual machine availability and storage availability have been pretty good in the cloud. I would argue they are better than traditional managed hosting or traditional enterprise architectures, because of the way you can do n+1 with virtualization, whereas those environments don't have that breadth of systems to draw on to ensure availability.
The SLAs on instance availability are pretty good. The problem with SLAs for the cloud is that instance availability is only one part of ensuring that what you are running in the cloud works the way it's supposed to. The big issue today is latency: you often have instances that are available, but because everybody sits on one big shared network, if any one customer starts really overusing that network, or too many people are subscribed, the machines and the network between the machines slow to a crawl.
You don't know whether you are going to get something back in sub-millisecond time or in a couple of milliseconds. That issue around performance really needs to be addressed. I'll be very honest with you: we are in beta with our customers, and we don't yet know how we're going to SLA that - what SLA we should offer to show that we maintain high network and instance performance, that we're not oversubscribing the virtualization farms, for example. We do know what customers are asking for.
This is an evolving area with customers: figuring out, "How can I give you an SLA that guarantees your systems' performance is going to be good and the latency between the systems is going to be good?" The other question gets back to this item of process again; I think we talked about it a little earlier. A lot of cloud environments - still many today - weren't designed with the idea of live support. I love community support. Community support is the best thing for configuration and setup! I never call in if I want to figure out how to do something.
I go onto the message boards, I talk to other people using it, and that usually tells me how to use it most effectively. But when there is an outage, when there is a problem, you need to be able to escalate it. We've seen instances out there where everything worked the way it was supposed to from an SLA perspective, but, for example, there was a major outage and you couldn't get your e-mail on any cloud servers for 36 hours. Because there was no live support, or the kind of quality escalation procedures you'd expect from a compliance standpoint, that really caused a problem.
Here you have an SLA, but no way to actually escalate a problem within the organization. Everything "worked," there was no SLA violation, but you couldn't send an e-mail for 36 hours. I don't think an enterprise end user would consider that something they should keep paying for. If you can't send e-mail on systems that are supposed to be e-mail systems, that should count as an outage. So the questions become: how do we SLA application availability, not just infrastructure availability? How do you know those problems are occurring, and how do you escalate them into the organization? For that you really need more than someone sitting on a message board or answering an e-mail.
You need live customer support with escalation procedures that let you say, "If I get one customer saying they can't get e-mail, that may be a level 3 escalation that has to be responded to within 2 hours. If one customer can't get to anything, that's a level 1 escalation. If 100 customers can't get e-mail, that becomes a level 1 escalation that has to be responded to in 10 or 15 minutes." These are the types of processes and systems that need to be in place, the type of support that needs to happen, if the cloud is going to get into the enterprise.
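A rough sketch of that escalation matrix in code: the level 3 and level 1 cases follow the examples just given, while the intermediate tier and exact thresholds are assumptions for illustration.

def classify_incident(customers_affected: int, total_outage: bool) -> dict:
    """Pick an escalation level and target response time for an incident."""
    if total_outage or customers_affected >= 100:
        return {"level": 1, "respond_within_minutes": 15}   # widespread or complete outage
    if customers_affected > 1:
        return {"level": 2, "respond_within_minutes": 60}   # assumption: intermediate tier
    return {"level": 3, "respond_within_minutes": 120}      # single customer, single service

print(classify_incident(customers_affected=1, total_outage=False))    # level 3, 2 hours
print(classify_incident(customers_affected=100, total_outage=False))  # level 1, 15 minutes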
Community support is critical, but when people are spending enterprise dollars on enterprise environments, they want to know that when there is an outage or a problem, someone is going to pick up the phone and react immediately - not "Hey, I sent an e-mail, wait for a reply or check the message boards to see what's going on," but a one-to-one response, ideally even an outbound call saying, "We're experiencing an issue, here is what's going on." You need that human connection and that sense of urgency to make those things happen. Support is more than just showing you how to use the product; it's also responding to problems as they arise, so you can respond more quickly.
It's a little harder than with traditional hosting, and the reason is the way people buy cloud services today. The cloud has been tremendous at breaking down the cost of everything. Every little cost is separate, and people love that. With OpSource, a private network is 20 cents an hour, a CPU hour is 4 cents, a RAM hour is 2.5 cents, each additional sub-administrator - which gives them a separate VPN - is seven tenths of a cent an hour, and each gigabyte of storage is three hundredths of a penny an hour. You are really getting down into these super-micro charges; it's like ordering off a Chinese menu, item by item.
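As a back-of-the-envelope illustration of how those micro-charges add up, here is the arithmetic using the per-hour figures quoted above and an invented workload:

HOURS_PER_MONTH = 730

prices = {                 # cents per hour, from the figures quoted above
    "private_network": 20.0,
    "cpu": 4.0,
    "ram_gb": 2.5,
    "sub_admin": 0.7,
    "storage_gb": 0.03,
}

usage = {                  # illustrative workload, not from the interview
    "private_network": 1,
    "cpu": 4,              # four CPU-hours every hour
    "ram_gb": 8,
    "sub_admin": 3,
    "storage_gb": 200,
}

monthly_cents = sum(prices[k] * usage[k] for k in prices) * HOURS_PER_MONTH
print(f"Estimated monthly bill: ${monthly_cents / 100:,.2f}")  # roughly $468 for this example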
The question is, how do you SLA when people are buying all these little micro items? Do you SLA storage at three hundredths of a penny an hour? Do you SLA the private network separately? The sub-administrator VPN connection? The RAM, separately again? In the past, SLAs from service providers were, "Here is a managed server, I'll SLA the whole server. I'm not SLA-ing the storage differently from the CPU or from the network connection to the server; I'm SLA-ing that the server is up and running." People buy things in different ways now, and it does make this a little more difficult. I think a lot of it is customers figuring out what they want to see from SLAs.
Of course, in the end an SLA, as I said before, is just a way of showing that we agree and are committed to things working. The first problem is to solve the issues themselves when things aren't working - the problems with latency, the problems with instance availability. The second is to figure out what the right SLAs are for the way people are buying. Do you really want an SLA on an item billed at three hundredths of a penny an hour? Do you want an SLA that bundles those items together? As long as micro-billing remains the preferred way to buy cloud - and it definitely is - people are going to keep asking, and we are still going to have to figure out what the appropriate SLAs are for those environments.
This is obviously a critical issue for us. OpSource runs on the order of 5,000 systems in the SaaS world today, so we have a pretty good idea of running a good-size environment - not a huge one, but a good size. Our customer base has been primarily the US, with Europe just getting into SaaS, and now we are looking at the cloud. Even without doing any sales or marketing toward it, we are seeing about 35% of our cloud sign-ups coming from international customers - people from places we kind of expected, like Japan and the British Isles, but also places we didn't expect: South Africa, Argentina, Switzerland, Indonesia.
There is this huge breadth of people coming from around the world and using the cloud. Our decision, when we deployed the OpSource Cloud, was that to bring this technology effectively to as many customers as we could, we needed to partner with somebody who had that kind of worldwide reach. The OpSource Cloud is actually built on NTT, the world's second-largest telecom. We're leveraging their network, their data centers and facilities, even their infrastructure, with our software and delivery on top of it, to make sure we have that kind of worldwide presence.
What's been fascinating is that we started with our first deployment just in Northern Virginia, and the assumption was that we would need to go international pretty quickly if we wanted to serve a number of locations effectively. Then we discovered that even for Japan, which is halfway around the world - I believe it's 12 ... from Northern Virginia - we're still seeing 25-millisecond latency, which is tremendous.
That's about the speed of light going straight across the country and over to Japan, and we think a large part of it is due to NTT's reach and scope. Even with the existing data center, we do think that as we expand internationally, we are going to want to give people the ability to buy local resources - for lower latency, for breaking their applications up across multiple locations, for business availability and disaster recovery, and of course for higher-speed access to local systems for international users.
I think that gets back to our underlying philosophy on cloud and SaaS. I know we mostly talk about the cloud as cloud infrastructure these days, but in our minds the cloud is anything on the Internet - hence the cloud we used to draw to represent the Internet. It could be public cloud things like Facebook and Google, it can be SaaS, which is business applications in the cloud. What we think is driving the cloud into the enterprise is less about people saying, "Oh, I can buy things for 4 cents an hour now," and more about a change in the users who are working in the enterprise.
I always tell the story of when I first started working, 20-plus years ago, coming into my first job out of school and working in an environment where everything sat on minicomputers. Our mid-sized company had minicomputers - DEC VAXes - and my first job was building a training database, which I put on a PC. Not because I didn't know what a minicomputer was, but because I was a PC guy: I did PCs in high school and I liked PCs. We later networked that PC up, put the first network into the company, and started running client-server applications, and the training systems we built spread across the organization. Over about the next 20 years, client-server replaced things like VAXes and mainframes as the primary form of enterprise computing.
Today we have people entering the workforce who've grown up on the web, who've had immediate access to applications their entire lives; they've never had to wait for anything. They do a search and get going. They have no concept of work data versus home data - any data they want is available to them any time, any place. They've grown up sharing and collaborating on everything, and now they are coming into a workforce where traditional enterprise applications and deployment models look extremely slow, are very inflexible and have way too many restrictions on access.
The ones who are programmers grew up in these environments; they expect an API to everything, expect to be able to link up APIs across their various environments, and want to share and collaborate on what they build. They are beginning to drive enterprise adoption of cloud and SaaS environments because they expect their work applications to behave the same way the applications they've used their entire lives do. And these people are starting to become decision makers - we are really talking about people in their late 20s or their 30s - becoming the managers and directors and the CIOs of organizations.
They are not going to walk into an environment and say, "Hey, I don't trust this environment because I don't have my security guys working on it, or I don't have my storage guys." They are going to go the exact opposite way: "What do you mean I have to secure this application? What do you mean I have to build out another storage array? No, that's not the way it works. I search for something on the Internet, I program to an API, I get everything from the cloud. Your job, service provider, is to make sure all of that just works." That's what's driving cloud adoption: people who are used to using the Internet and want to use it in business the same way. And that's not going to happen tomorrow.
We still have mainframes and minicomputers out there - quite a few of them - but it's going to happen over the next 10, 15, 20 years. You are going to see more and more of what enterprises do move to the cloud, because that's how the users in those enterprises operate and execute. In the meantime, we still have to figure out how to run these hybrid environments and how to keep extending cloud use. That was interesting - it was probably the first thing that came up when we went into beta.
The first thing our customers started asking us was: "We love the cloud. We love being able to turn resources up and down, for burstability or for time-of-day computing. We love the fact that we don't have to make CAPEX commitments, especially in this economy. But we're not going to throw away what we already have. We already have a traditional managed hosting environment, we already have a traditional data center, and there are still price/performance cases where the cloud isn't optimal for every single thing."
If you are running a big, giant database, it's still cheaper to buy the server and buy the storage than it is to buy a lot of cloud resources today. So you may want to interconnect those environments. A lot of what we're going to see over the next 5 or 10 years is not just "How is the cloud going to give better service to my enterprise?" but "How can I link the cloud back into what I'm doing in my corporate data centers and my hosting environments, so I can leverage what I already have but also expand seamlessly into the cloud and get the best of both worlds?"
We're going to see that, whether it's cross-connects at the network layer or things like the expansion of VMware into the cloud, which is going to be huge for enterprise adoption - because open source is great, but most enterprises have standardized on VMware for virtualization. Now that you have things like OpSource and vCloud Express, which are based on VMware, you can actually transfer images back and forth seamlessly between your corporate environment and the cloud without having to jump through a lot of hoops.
So I can take advantage of my invested infrastructure in VMware and still leverage the cloud - those types of connections. Finally, it's going to get to a data-level connection: how do I move data back and forth between the cloud and my own environment? How do I do things like single sign-on, and where is my master record of authority if I'm keeping some data out in the cloud and some data back home? How do I ensure that's secure? We're going to move past infrastructure and get into more sophisticated application-level connections through these APIs. Someone who's programmed their entire life can write a mashup like mint.com that connects all these different banking resources and your Facebook and brings it back into your own application.
You'll start seeing business applications do the same thing, linking their behind-the-firewall data with something sitting in a SaaS environment, plus something they've built as a cloud application, all in one place - hybrid applications that are actually dealing with data from both behind the firewall and out in the cloud. Managing that is going to be a big part of the future work for both developers and cloud providers.
I think you are already seeing IT get there, starting with cloud infrastructure. With it, you already have IT guys - sys admins - who have never physically touched a server. Think about that: there are people who are becoming literal system administration experts in Linux or Windows or whatever their underlying OS is, deploying systems for an organization, without ever physically touching hardware any more. That's the first level of abstraction cloud infrastructure brings to IT: they get out of the hardware business and into the server business, the systems business.
I think over time that will take another layer of abstraction away, where even managing systems becomes less and less important. The technology that will actually sit within the enterprise, once cloud environments are largely in use, will be access points - a laptop or an iPhone, a printer and a network connection. Most other technology won't exist within the corporate environment when you are largely using the cloud. I can tell you we're a mid-sized company, and of course we eat our own dog food here: we use the cloud for everything, SaaS for everything. I don't think we have three servers in the organization.
We're all on different applications for customer support, CRM, back-office financials; we don't have any servers anymore, and our IT guy, whose job used to be setting up servers all the time, now spends his time figuring out how to integrate the data from our billing systems back into our Salesforce system, using technology like Boomi, which is a cloud integration platform, to deal with the data and information - so we can get reporting back on why people are signing up, how they are signing up and what they are spending. His job is really not IT now, it's IM - information management.
His job is this: we have information coming out of billing and information coming from sales; he needs to link the sales data to what's coming out of billing, take what's in sales and report it all back to Treb; and on top of that there's all this information coming from the infrastructure about how many people are using what. His job is figuring out how to bring all of that together. He hasn't actually set up a server in two or three years - I'm afraid we have a bunch of old servers lying around that we don't use anymore. That's really the evolving job of IT: they become information management professionals.
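As a simple illustration of that kind of information-management work - not Boomi's actual API, just a sketch with made-up field names - joining billing records with CRM records into one report might look like this:

billing = [
    {"customer": "acme",   "month": "2009-11", "spend": 412.50},
    {"customer": "globex", "month": "2009-11", "spend": 88.20},
]
sales = [
    {"customer": "acme",   "owner": "jdoe", "plan": "enterprise"},
    {"customer": "globex", "owner": "mlee", "plan": "starter"},
]

# Index the CRM records by customer so each billing row can be enriched.
by_customer = {row["customer"]: row for row in sales}

report = [
    {**bill,
     "owner": by_customer[bill["customer"]]["owner"],
     "plan": by_customer[bill["customer"]]["plan"]}
    for bill in billing
]

for row in report:
    print(f'{row["customer"]}: ${row["spend"]:.2f} ({row["plan"]}, owner {row["owner"]})')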
The technology guys still get to do servers - there are guys out there who like that, and they end up working for companies like us. Everybody forgets that the cloud doesn't eliminate physical infrastructure; it just means the guys who are really into EMC arrays and Cisco and HP blades work for guys like OpSource.
I will bring up a funny comment: so many times people say, "Your systems are in the cloud," but everybody should realize the cloud at some point has to come back to physical servers sitting in physical data centers. There has to be some place for that, so those kinds of technology guys end up working for guys like us, as opposed to working for enterprises. The enterprise guys tend to be data guys who can manage the data, information flows and processes.
I think so. As people become more and more comfortable with the cloud, it's not going to be a separate part of your infrastructure; it's just going to be a transparent extension of it. I use an analogy from what we saw happen in the networking world. When I first started doing the service provider side of things, a private circuit was literally a private circuit.
A private circuit was a piece of cable: you literally dug up the ground and laid it down, you had one end of that cable sitting in your office, maybe in San Francisco, and the other end sitting in your office in New York, and you owned that cable. That's what a private circuit was. Then we started doing shared circuits and frame relay and the like. With the predominance of TCP/IP - once it became the dominant networking choice with the growth of the Internet - the idea of what made a private circuit changed, and everybody started using Internet circuits.
You had Internet circuits for connecting to the Internet and private circuits for your internal networking, and those sat on two separate networks. Then we invented MPLS, which is a way to ensure privacy and quality of service on top of a standard Internet network. Now, when anybody says, "I want a private circuit," what they are really buying is an Internet circuit with enough technology on top of it to ensure performance and privacy - and it's actually running over those same networks.
When you get an MPLS circuit, which is what people call a private circuit today, it's running on the same networks as the Internet. We will see the same thing happen in the cloud. People will say, "I want a private environment," and it's probably still going to be on the Internet; it's just going to be a private space within the Internet, the way an MPLS circuit is. It takes a while for people to get comfortable with that, but it's amazing: if I say "private circuit" right now, no one imagines actually digging up the ground and laying cable.
What they mean is, "I want an Internet circuit where I have all the controls and quality of service set up to ensure it's private for me." You'll see the same thing out in the cloud, where in 10 years' time people will say, "I want a private environment," and they won't mean "I want you to build me a whole data center that has only my things in it," but "I want you to carve out a part of the cloud that is only for my use, where I can be sure no one else has access." It's going to be interesting to watch that transition happen over the next 5-10 years.
Absolutely not. It's funny: for all the talk about the cloud, it's still basically low-level infrastructure being delivered today - compute power, network and storage. What that takes off your operations plate is just setting up servers with an OS; it isn't actually running the applications that sit on those servers or tuning the databases that sit on them. You probably do get to let go of some operations people, because if all they are doing is unpacking boxes and putting things in racks, that's not going to be very necessary.
That said, the business of running IT is about managing the data and information in the organization and optimizing that information. Even with the cloud today, if you put a database into the cloud, you still need to tune that database. As much as everybody would like to say, "Hey, you can do just about anything with a publicly accessible database," that's not the way it works. You still need database experts who can manage the database - the tables, the access, the applications.
I think your operations people will morph into more data-focused people - we have seen that kind of convergence of operations, programming and development into people who think in terms of data instead of systems and technology - but they don't go away. If you have people who really are just cable monkeys or rack-and-stack guys, which is how I started in this business, then those people will probably not remain part of the organization; hopefully they'll move up into the more sophisticated operational side of the organization, or perhaps move on to somewhere like OpSource, which still has to deploy all of these things.
13. What are your customers using the OpSource Cloud for these days?
It's still early on, so we are just seeing the initial trends. The great thing about the cloud is all the metrics you get, because you can see who signs up, how they sign up and what they sign up for. But for things like what they're using it for, we just ask them. One of the things that comes with OpSource is that we call you, and one of the first questions is, "What are you using it for?" You don't have to tell us, but if you do, it helps us figure out how to build better services for you and gives you some idea of what we may be able to do. From an application perspective, it's been all over the board.
For internal applications we have healthcare and e-commerce; for external applications we have things like gaming. We have yet to see any one application type account for even 10% of the customers - that's how widespread it is. What has been very interesting, though, is how many people are using the cloud for their own customers. Without us doing any sales and marketing toward this, without us positioning it this way, about 35% of our sign-ups were by consultants who were doing projects for other customers and using the cloud.
Some of these we knew about - guys like Bplats, a Japanese systems integrator, who wanted to offer a cloud offering to their customers, a Bplats-branded cloud built on OpSource for their end customers. Others, though, aren't even selling cloud to their customers. They are selling applications - they may be doing web site development or setting up something like a CRM for a small business - and instead of going out and buying an HP server and setting it up for the customer, they just buy the OpSource Cloud and say, "Here is your IP address to go with the application I set up for you, or the web site I designed for you."
So probably the most interesting observation so far is not how many people are using this for their own enterprise needs, but how many people who work with enterprise customers are using it for those customers, so they don't have to buy systems or do setups - whether they are delivering a cloud offering to the customer or an offering that has nothing to do with the cloud. The cloud is just a way to enable their business to provide the same service they've always provided to customers, whether that's web site design or setting up CRM for small companies.