Cloud computing promises virtually unlimited processing resources on demand, the scalability businesses have long been looking for, and reduced costs through the option to pay only for what one uses. In this virtual panel, InfoQ asked leading cloud experts about the benefits cloud computing brings and the constraints in adopting it, whether it is better to use a public or a private cloud, whether cloud interoperability is needed, what the difference is between providing infrastructure and providing a platform, and how a client can enforce regulatory compliance.
The panelists who answered our questions are:
Jerry Cuomo, VP and CTO for WebSphere, IBM
David Linthicum, Founder of Blue Mountain Labs
Geva Perry, General Manager Cloud Computing, GigaSpaces Technologies
Jamin Spitzer, Director of Platform Strategy in Microsoft’s Developer & Partner Evangelism Group
What does cloud computing bring to the industry?
David: Based on the old timesharing model, cloud computing is an approach to computing that's much more sharable and thus more cost effective and efficient than traditional approaches. Through economies of scale, industry will have access to resources it previously could not afford, including analytical services, enterprise applications, and the ability to leverage infrastructure services as they are needed, when they are needed. Moreover, there is a network-effect benefit, and the ability to leverage Web-bound resources, such as social networking, much more easily than by integrating them with on-premise resources. This is an old style of computing with a new set of technologies in the marketplace, and it will provide an opportunity for industry to reinvent its computing infrastructure and enterprise architectures, becoming much more cost effective.
Geva: Cloud computing takes the industry forward in two ways. One is tactical and has to do with increased efficiency and improved economics. The other is more strategic, as it empowers developers and business users, enabling rapid and effective innovation.
1) Increased efficiency and better economics. Cloud computing takes advantage of economies of scale. By allowing many users to share the same IT infrastructure, the cloud provider can achieve much higher utilization rates than those normally seen in typical dedicated environments. Also, as more companies "cloud-source" their IT, cloud providers become even further specialized and efficient in running massive-scale data centers, while their customers can focus on their core business.
2) Rapid innovation. Cloud computing speeds up individuals' and organizations' ability to innovate and shortens the time-to-market for new products and services. It does so by removing the need for large upfront IT investments, by streamlining and automating processes (such as server provisioning or moving from staging to production environments) and by empowering both developers and business users with direct, self-service access to the IT resources they need to be more productive.
Jamin: The next wave of computing – a combination of centralized computing resources for cloud computing and an increasing edge capacity from a higher number of more powerful devices, characterized as the ‘cheap revolution’ - will allow users and companies the convenience and choice of remote computing resources alongside personalized device experiences. The cloud offers users, organizations and developers a choice: the opportunity to leverage a highly efficient and massively scalable technology infrastructure platform to create and access application experiences that augment existing investments and traditional deployment options (i.e. corporate datacenters, PCs, etc). We should think of the cloud as an extension of existing computing platforms. Eventually, a well-defined set of criteria will emerge that will help organizations understand when/how they will rebalance across this lengthened continuum.
Jerry: We really see cloud computing as a model for enabling the industry to work smarter. A perfect storm of events is happening to enable the alignment of a business model (e.g., pay per sip) with evolving technology (e.g., virtualization) and standards/architecture (e.g., Web and SOA) to produce a computing outcome that is especially attractive, given the current downturn in the economy. Hence, its appeal has something for everyone. Businesses need not pay up front and/or can outsource parts of their IT operation, allowing them to spend more precious time on their core business. IT can focus on scaling their infrastructure based on application demand (the days of grossly underutilized systems are behind us). Software developers can help themselves without waiting for IT to provision systems and acquire the right software.
What are the practical constraints that a company should keep in mind when adopting the Cloud Computing architecture?
Jerry: When you think about your cloud architecture, we suggest you think about it using a service-oriented approach. Given that a successful service-oriented architecture starts with some business objectives, it's best you have those straight first. Reducing labor and energy and improving time-to-value are the typical business motivators. Adopting a cloud, with its low barrier to entry, allows some businesses (or departments therein) a chance to play where using a traditional model would be a non-starter. I recently blogged about our cloud architecture and how we like to break the cloud into a set of services layers, each providing unique value to an organization. The services layers include Infrastructure, Platform, and Application Services (while there are other layers, we usually talk about these three the most). Pick the right cloud service for the right job, and don't be afraid to create one yourself. Not all clouds are created equal; this leads to the questions about private clouds versus public clouds (and hybrid clouds). The bulk of our customers are concerned about security and isolation of their applications and data, so most of our customers start with (private) clouds behind their firewall.
David: Performance and security come to mind first. Cloud providers have some ways to go before we’ll place state secrets out there, but based on the fast-paced evolution, I think “good enough” security systems are, and will be available. Performance can be an issue typically due to network and system latency, but that issue will vary greatly from cloud provider to cloud provider. Another constraint is interoperability, but we’ll cover that next. Companies should understand that not all applications are right for cloud computing.
Geva: There are several issues that should be carefully examined such as security and portability, but the one I want to focus on is scalability. Some of the assumptions and best practices taken for granted in a dedicated, static environment no longer hold true in cloud environments. Or even if they work, they don't let you truly take advantage of the power of a cloud such as Amazon EC2.
For example, we talk about the fact that with cloud computing you can scale up and down on-demand and only pay for what you use. That's great, but the architecture of most applications doesn't allow them to easily scale across many servers when needed and then shrink back when no longer necessary -- and do that within minutes. Increasingly, there are best practices and products that can address this need. I've been working with several companies in evaluating the options to do exactly this.
For example, HighScalability.com reported on how my friends at Rocketier were able to build a system that runs on 10 commodity servers and can handle 1 billion events per day. This system can also grow and shrink on demand and can therefore take advantage of a pay-per-use system such as EC2. It does so by leveraging a partitioned in-memory data grid as the system of record. This is a very different approach from the centralized database we are most familiar with.
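The partitioned in-memory data grid Geva describes is a product-level capability, but the core idea, routing each record to one of many in-memory partitions by hashing its key so data and processing spread across servers that can be added or removed, can be sketched briefly. The following Python is a minimal illustrative sketch under assumed names (the class, partition count, and event fields are hypothetical, not the Rocketier design):

```python
# Minimal sketch of a partitioned in-memory data grid (illustrative only).
# Each "partition" is just a dict here; in a real grid each partition would
# live on its own server and partitions could be added as load grows.
import hashlib

class InMemoryGrid:
    def __init__(self, num_partitions=4):
        # One dict per partition stands in for one grid node.
        self.partitions = [dict() for _ in range(num_partitions)]

    def _partition_for(self, key):
        # Hash the routing key to pick a partition deterministically.
        digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
        return self.partitions[int(digest, 16) % len(self.partitions)]

    def put(self, key, value):
        self._partition_for(key)[key] = value

    def get(self, key):
        return self._partition_for(key).get(key)

# Usage: each event lands in whichever partition its key hashes to, so adding
# partitions spreads both the data and the write load across more servers.
grid = InMemoryGrid(num_partitions=4)
grid.put("event-123", {"user": "alice", "action": "click"})
print(grid.get("event-123"))
```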
Jamin: At the individual application layer, there is a set of considerations a company should make about each application it is considering for the cloud: what are the economics associated with deploying and running a given workload? What are the regulatory, security and privacy requirements for the application and its data? What kind of SLA do you need? How much customization and configurability do you need?
Businesses focus not only on the needs of the company and its employees, but increasingly care about the technology needs of their consumers and partners. By delivering integrated functionality across the enterprise and the web that can be accessed via a wide selection of devices, businesses can create more efficient workforces, more loyal customers and more efficient supply chains. With the addition of the cloud as a deployment option, the flexibility of deploying an application on-premises or in the cloud (or both) allows companies to rethink what they can do to advance their business interests with flexibility, usability, security and richness. This approach frees the company's IT personnel to deliver the functional value they support first and the technology deployment of that functionality as a secondary consideration. Companies should think about application scenarios that both bring their existing assets forward and create entirely new classes of applications.
What’s your take on vendor lock-in and the need for interoperability between cloud platforms?
Jamin: Interoperability between clouds is only part of the conversation. Companies with decades of IT investment need to bridge their existing environments to new cloud environments with robust flexibility, so that applications that reside on-premises and new applications in the cloud can interoperate in ways that move the business forward and do not require new data integration investments. In many ways, for most companies, interoperability in the cloud becomes an extension of the age-old conversations about interoperability and portability on-premises.
Interoperability in the cloud will be vital. Standards will emerge that dictate interoperable terms for cloud platforms that will enable cloud-cloud and cloud-corporate datacenter interoperability.
Microsoft's Azure Services Platform is an open and flexible platform that is defined by web addressability, SOAP, XML, and REST. The goal of this approach is to ensure an extensible programming model so that individual services can be used in conjunction with applications and infrastructure that run on both Microsoft and non-Microsoft stacks. More detail can be found here.
Jerry: I wish good luck to any vendor that attempts to bring a lock-in strategy to the table. In this day and age, our customers demand open software and systems, and the choice and interoperability they bring. Now, there is a time for a "settling in period" for some of the emerging standards to mature. However, the usual suspects (vendors) are already gathering. Let's take cloud infrastructure as an example; we in IBM are striving to bring our customers the same benefits as when we rallied with the industry to bring the world Java-based middleware. The allure of write-once, run-anywhere is still a powerful thought. As with Java, we are now striving to allow our customers to virtualize their infrastructure once, and dispense it anywhere (cloud and/or hypervisor). There are several standards on the brink of bringing our customers this level of flexibility; the Open Virtualization Format (OVF) is an important standard that will help enable this behavior. In fact, at IMPACT 2009, we announced an option to purchase WebSphere Application Server as a binary, pre-installed, pre-configured (including OS) virtual image (using the OVF standard). Customers who buy (or upgrade to) this option never have to install WebSphere again (no wise cracks please :-). It just needs to be copied to an OVF-savvy hypervisor, and you're up and running.
David: It's a huge issue. As the cloud providers have built their platforms, they are largely based on proprietary architectures, APIs, resources, and languages. Thus, once you've built an application on a cloud platform, generally speaking, it's difficult to move it on-premise or to another cloud platform. There are initiatives underway to create standards here to support portability, but we are a bit far out from a group of de facto standards for the cloud computing space.
Geva: I wrote a blog post entitled Beware Premature Elaboration (of Cloud Standards) in which I discussed this topic. As I wrote there, eventually achieving interoperability in cloud computing, and even formal standards, is critical for the mainstream adoption of cloud computing. However, at this point it is too early, and we need to be careful not to rush into it. We do not yet have a deep enough understanding of the challenges that the standards will need to address.
In the meantime, there are a number of interesting developments to track. GoGrid has open-sourced its APIs and is making an effort to have other vendors adopt them. There is an open source project called EUCALYPTUS which has implemented some of the Amazon EC2 and S3 APIs. Other vendors such as Enomaly also offer open source cloud software, and there is a Cloud Computing Interoperability Forum. Over time, the market will vote with its feet. De facto standards will emerge, and eventually formal ones.
Why would someone choose to use private clouds instead of public ones?
Geva: Very large companies and organizations have already made huge investments in IT infrastructure, and many of them possess deep expertise in running an efficient data center suited to the specific needs of their business and industry. Why did Amazon decide to get into the Infrastructure-as-a-Service business? Because they realized they are very good at running an extremely efficient web infrastructure. Well, there are plenty of other companies out there who have such expertise (and expertise that is relevant to their particular business or industry). They may also have specific concerns about regulatory compliance (such as HIPAA compliance in the healthcare business), which prevents them from using a public cloud, or they may have stringent SLAs which none of the cloud providers currently offer.
But perhaps more interesting is the question of what the difference is between a private cloud and a run-of-the-mill data center. The first point to remember here is that "private cloud" does not necessarily mean that it is run in a company-owned, on-premise data center. There are techniques and technologies out there to create virtual private clouds in public environments. CohesiveFT's VPN-Cubed is a good example of one product that can help with this.
But even running a private, internal (on-premise) cloud may make sense for some large companies. These companies can adopt cloud principles such as a multi-tenant architecture, virtualization, automation and self-service to achieve three benefits: 1) increased efficiency in hardware utilization, 2) increased efficiency in IT operations, and 3) rapid innovation cycles.
Jerry: Many of our customers, in IBM, are excited about the prospects of private clouds. In fact, our cloud strategy starts with the thought of "Rainmaking", a term I apply to communicate the thought of enabling our customers (with products and services) to "seed" clouds (privately) in their enterprise, and, where it makes sense, utilize the services of public clouds. Our customers are building private clouds today, and our primary focus is to assist them in creating, automating, optimizing and managing those clouds. Many of them go the private route because they are concerned about security (of their applications and data) and already have costs sunk into infrastructure and labor that they want to utilize. We see customers building private clouds to take on many interesting tasks. Test and development clouds are becoming very popular. In fact, we now use a private test cloud (using our newly introduced WebSphere CloudBurst technology) to do product testing of WebSphere Application Server. This gives us a way to share resources, precisely produce test environments (in a secure and repeatable fashion) and reduce the labor of operating daily setups and teardowns.
Jamin: If the question of private versus public is one of on-premises, user-managed versus vendor-hosted, then the considerations of the practical constraints discussion above apply. The issue is primarily about control, and companies need to weigh many of the same considerations they have in the past when they implemented/customized their infrastructure, platforms and applications.
David: When security, performance, and control are at issue. You can build shareable infrastructure within the firewall, leveraging most of the benefits of cloud computing, including the ability to share resources effectively and make them available on demand. This will be the largest growth area in cloud computing, considering that most enterprises won't be willing to give up control of their core IT infrastructure, at least initially.
Some cloud vendors offer infrastructure (Amazon) while others offer platforms (Google). How should a typical application architect choose?
David: It depends entirely on the needs of the enterprise architecture and/or application. While Amazon allows you to consume infrastructure resources by type of resource (storage, database, etc.), Google is offering a complete platform for application development and deployment. Thus, you really need to understand your requirements first, define the business problems you're looking to solve, and then select the appropriate solution: on-premise, cloud-delivered, and all points in between.
Geva: In most cases, to be able to offer a platform-as-a-service, such platform providers need to significantly limit the use case and the stack of technologies employed. In exchange for this limitation, the benefit is typically extreme ease in development, deployment and run-time management. If this limited use case and stack fits the needs of the application in question, then it is a good fit. It is important to note that these platforms vary widely. Google limits it to Python, a specific data model, threading model and so on. An application written for Google App Engine may be ported to other platforms with relatively few changes. Force.com is a PaaS that is downright proprietary, including its own programming language, Apex. Your app will not be able to run anywhere else without a complete re-write. On the other end is a PaaS such as Heroku for RoR. Although you need to write some elements of your application in a certain way in order for it to run on Heroku's platform, these are merely "best practices" and in no way lock your application into Heroku.
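To make the "limited but simple" trade-off concrete, here is a minimal sketch of what a Google App Engine (Python) request handler looked like in that era. The handler name and response text are illustrative placeholders; the point is that the developer writes only the application logic while the runtime, provisioning and scaling are fixed by the platform (and the code only runs inside the App Engine environment).

```python
# Minimal sketch of a Google App Engine (Python) handler of that era.
# The platform dictates the language, runtime and data model; the developer
# supplies only the request-handling logic.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        # Respond to HTTP GET on "/"; scaling is handled by the platform.
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello from a PaaS-managed runtime')

application = webapp.WSGIApplication([('/', MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()
```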
An infrastructure-as-a-service such as Amazon Web Services or GoGrid provides raw IT resources such as compute capacity, memory and storage. It offers much more flexibility, but requires more work and administration. It does not provide higher-level services such as out-of-the-box scalability (including auto-scaling) and fault-tolerance. However, as time goes on, IaaS providers are beginning to offer additional higher-level services and the lines are blurring. In addition, there are some third-party providers such as RightScale and GigaSpaces that help close the gaps between IaaS and PaaS offerings.
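By contrast, with an IaaS the application operator decides when to start and stop machines, which is exactly the extra work and administration described above. The following is a hedged sketch using the boto library against EC2; the AMI ID, key pair name and instance type are placeholders, and credential handling is assumed to come from the environment.

```python
# Illustrative sketch of pay-per-use scaling against Amazon EC2 using boto.
# The AMI ID, key name and instance type below are placeholders.
import boto

conn = boto.connect_ec2()  # credentials assumed to come from environment/config

# Scale out: launch an extra worker instance when load rises.
reservation = conn.run_instances('ami-00000000',
                                 instance_type='m1.small',
                                 key_name='my-keypair')
instance = reservation.instances[0]

# ... run the workload, monitor utilization ...

# Scale back in: terminate the instance so you stop paying for it.
conn.terminate_instances(instance_ids=[instance.id])
```

The design point is that nothing here is automatic: the decision logic, monitoring and fault handling all belong to the user, which is the gap that third-party tools such as RightScale aim to close.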
Jerry: Infrastructure services excel at allowing a user to run their existing applications and middleware in a cloud. Most infrastructure service providers enable generic infrastructure support, including the operating system and perhaps a basic middleware stack. You provide the rest, including the application and the "know-how" of what makes the application tick (scale, secure, perform). For example, in WebSphere-land, our infrastructure services attempt to bake in the "know-how" based on our 10 years of experience helping customers with WebSphere deployments. We introduce the notion of Patterns, which are virtualized deployments that factor in best practices in security, high availability and performance. We are also now starting to offer our software images within public cloud providers, like Amazon, giving our customers a very low barrier to entry for using our middleware (for development, test and beyond).
With platform services, the infrastructure is "magically" provisioned, and your focus is on the application or services in question. Many cloud platforms have programming models that are specific to the cloud in question (there goes that lock-in thought again :-) ), which makes the application more predictable, thereby allowing the platform to more automatically scale, secure and perform. For new applications this is fine; however, moving an existing application often requires the developer to re-write the application. These platforms are usually available within public clouds; that, along with the overhauling of the application in question, makes them attractive to some and unattractive to others. There is clearly an opportunity for the Java vendors to establish a Java "profile" for the cloud and give our customers some portability across cloud platforms (both public and private).
There are also hybrid models that I think are quite interesting. For example, at IMPACT 2009, we introduced BPM BlueWorks, which is a hosted offering for business leaders. The BlueWorks application provides a portal for business professionals to learn, share and collaborate with others in creating business strategy and process. Once the business asset is created, it can be exported into an on-premise cloud infrastructure (perhaps imported as a standard BPMN 2.0 document), thereby creating a model where your assets are developed using a (public) platform service and run within a private infrastructure service. Creating a "secure tunnel" between a public and private cloud is key to creating these hybrids. More on that in a second.
Jamin: Architects looking to move existing applications may find limited benefit relative to the changes required to move those applications to a cloud. In some cases the benefit could be immediate, and some users are actively reaping the benefits of outsourced cloud instances. However, other users have reported that maintaining legacy applications in virtualized infrastructure clouds actually costs the organization more than maintaining them in their private datacenters would have.
For architects that are designing net-new application scenarios or those that had the foresight to build applications that can take advantage of fabric-based cloud platforms, writing or porting an application to a scale-out cloud like Microsoft’s Windows Azure can provide greater efficiency than instance-based infrastructure, be it cloud-based or corporate datacenter-based. But there is no single formula to determine the best results. Architects need to investigate their options and make the right choices based on need, existing application architecture, available development skills and other factors.
How can a customer enforce regulatory compliance?
Jamin: Customers with regulatory compliance conditions must work with their cloud provider to ensure that all data retention and privacy rules (both legal and private contracts) are enforced when the workload is outsourced. Contracts with cloud providers must contain not only reliability SLAs but compliance SLAs that ensure that the customer’s data is being handled legally and managed in geographies that best suit the customer need.
David: There are two layers here: technology and procedural. At the technology level, make sure to create the right amount of governance for the systems, including access to core resources, audit paths, and other things that the regulations may require. Have yourself audited by an outside agency to ensure that everything is as it should be. At the procedural level, make sure that all legal documents exist, and that things are documented along the way.
Geva: In some cases, there is no difference between maintaining regulatory compliance in a cloud environment and in a traditional hosting, collocation or dedicated on-premise data center. In some aspects of compliance, the customer is completely dependent on the cloud provider and must verify in advance what steps the vendor is taking to ensure compliance (Amazon, for example, published a white paper on its security practices). There are some people who are already becoming specialists in these issues, such as PCI compliance in the cloud.
Jerry: One of the interesting aspects of cloud computing is that the abstraction of the cloud into infrastructure, platform and application services allows points of control to be inserted. IBM is focusing on providing capability at all levels of our cloud architecture to govern the use of the cloud. Customers can use this ability to gain an intimate understanding of how aspects of their systems are being used, and also use the cloud as a point of control and enforcement in accordance with their policies. For example, our WebSphere CloudBurst product produces detailed reports of who is using the cloud and how they are using it. Administrators can use this data to generate custom reports for charging and/or controls. Another example is that we see our customers using our WebSphere DataPower SOA appliances along with our Service Registry to discover services and control access (at a fine grain) to those services, both in private and public clouds. DataPower allows the creation of a secure tunnel within your private cloud that can extend into the public cloud if need be. Using a security gateway allows you to mitigate threats in your cloud while providing a point of control for auditing the two-way application service traffic. Whether a customer has real regulatory requirements or not, setting up a two-way (web) application firewall around your cloud is one way to work smarter (and safer) with clouds.
About the panelists
Jerry Cuomo is an IBM Fellow, VP and CTO for the WebSphere brand of products. He is one of the founding fathers of WebSphere and has spent 20 years at IBM, splitting his time between IBM Research and Software Group. Jerry is a breakthrough innovator of solutions in the areas of high-performance transactional systems, middleware appliances, enterprise cloud computing and web 2.0 technologies.
David Linthicum (Dave) is an internationally known cloud computing and Service Oriented Architecture (SOA) expert. In his career, Dave has formed or enhanced many of the ideas behind modern distributed computing, including EAI, B2B application integration, and SOA, approaches and technologies in wide use today. For the last 10 years, Dave has focused on the technology and strategies around cloud computing, including working with several cloud computing startups. Dave's industry experience includes tenure as CTO and CEO of several successful software companies, and upper-level management positions in Fortune 500 companies.
Dave focuses on best practices and the real business value of cloud computing, as well as the true fit of the technology within the context of enterprise requirements. His expertise lies in his ability to define where the business meets cloud computing, using familiar tools and understandable terminology.
Until recently, Geva Perry spent 5 years at GigaSpaces Technologies, where he played a variety of executive roles; his most recent position was General Manager of Cloud Computing. In this role, Geva was responsible for all global go-to-market activities at GigaSpaces related to cloud computing, including strategy and positioning, product marketing and strategic alliances. Prior to joining GigaSpaces, he was COO at SeeRun, a developer of real-time business activity monitoring software. Geva received a Bachelor's degree from Hebrew University in Jerusalem. He holds an MS from the Columbia Graduate School of Journalism and an MBA from Columbia Business School.
Jamin Spitzer is Director of Platform Strategy in Microsoft’s Developer & Partner Evangelism (DPE) organization in Redmond. Prior to joining Microsoft five years ago, he spent a number of years in the business applications space with J.D. Edwards and, later, PeopleSoft.