At the Interop 2011 conference in Las Vegas, Alistair Croll of Bitcurrent delivered a talk titled “How to Think Like a Cloud.” This session pointed out the differences between a traditional application development approach and a utility computing mindset. Croll focused on the solution deployment, security, performance and overall architectural characteristics of cloud-oriented solutions.
Croll claims that in order to think like a cloud:
- You treat machines as if they’re free. An organization that can instantly acquire and destroy server instances can handle a far wider range of scenarios, says Croll. His example contrasted a conventional enterprise application, which is typically taken offline during maintenance windows, with a cloud-oriented application that simply moves onto freshly launched environments while the original host receives attention (this pattern is sketched after the list).
- You realize that Big Data gives clouds something to do. The massive horizontal scale of cloud infrastructure is well suited to activities with variable demand and intensive data-processing requirements. An Amazon Web Services case study illustrated Croll’s point: The Washington Post needed to process thousands of pages of PDF content, and rather than grinding through twelve days of on-premises image processing, it shifted the workload to the cloud and completed the task in a single day at a cost of $144.62 (the parallel pattern behind this is sketched after the list).
- You know that clouds’ underlying architecture and utility model mean disaster recovery (DR) for free. While much has been written lately about the need to explicitly design highly available cloud solutions, Croll explained that most application data stored in the cloud is automatically replicated to a different physical location. For example, the Amazon Relational Database Service (RDS) provides automatic backup and recovery with virtually no data loss. Still, consumers of cloud services must confirm exactly which recovery characteristics they are getting rather than blindly assume their application data will survive infrastructure or system disasters (see the configuration sketch after the list).
- Sometimes “almost accurate, now” is better. Systems that enforce an “always accurate, right now” paradigm are difficult to scale because of their dependence on transactions. Croll espoused the virtues of eventual consistency, a model in which not every component has immediate access to the most recent data. For example, when a person deposits money into a bank account, they are often told the transaction will appear in the system within two business days; the updated balance eventually propagates to every system that needs it (a toy model appears after the list).
- You can have any performance you want, as long as you’re willing to pay for it. An application’s response latency tends to increase as its workload grows, and Croll points out that when organizations deal with constrained capacity, an unhealthy competition for resources can follow. In a cloud-like environment, users can simply pay for access to more capacity (a simple scaling rule is sketched after the list). Croll astutely warns cloud consumers, though, to read service contracts carefully and invest in monitoring so they can closely track the performance of the commodity, mass infrastructure that cloud vendors provide.
- You realize that data is the center of gravity. Data, says Croll, is each cloud vendor’s lock-in technique: organizations drift toward the location of their data. VMware’s Dave McCrory discusses this notion of data gravity in depth and stresses that the need for low-latency data access will pull applications and services closer to the physical location of the data.
- You know the castle walls don’t work when the villagers are roaming the countryside. Perimeter-based security has become less meaningful as workloads become more mobile; according to Croll, workloads need to carry their own protection. Simply protecting the network that contains the data is no longer adequate, so control must move from servers down to individual files and even to atomic units within the data itself (see the encryption sketch after the list).
- You’ll code data, infrastructure and application all at once. Through the practice of DevOps, software teams are automating infrastructure creation and even requesting environments as part of deployment manifests. Platform-as-a-service offerings such as VMware’s Cloud Foundry take this idea further, letting you build solutions where the application code, data services and infrastructure hosts are all part of a single deployment package (a manifest-shaped sketch follows the list).
- Your business cases are fire, aim, ready. Traditional IT requires an upfront return-on-investment (ROI) analysis to decide what to build and deploy. Now, Croll says, software teams can deploy first, analyze the impact and grow or shrink accordingly. Online game giant Zynga exhibits this principle: the company recently explained how it deploys new games to the cloud to gauge their popularity and expected load, and only moves a game that proves viable from the cloud into its internal data centers. Cloud environments provide an ideal setting for rapid prototyping as users refine requirements and business owners determine a solution’s feasibility.
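A few sketches make these ideas concrete; none come from Croll’s talk, and all API details are modern approximations. First, the “machines are free” maintenance pattern: launch a replacement environment, shift traffic to it, then destroy the old host rather than patching it in place. This sketch uses boto3, the current AWS SDK (the same idea ran on its predecessor, boto, in 2011); the AMI and instance IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Stand up a fresh environment beside the one that needs maintenance.
response = ec2.run_instances(
    ImageId="ami-12345678",   # hypothetical machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
new_host = response["Instances"][0]["InstanceId"]
print("replacement host:", new_host)

# After traffic shifts to the new host, throw the old one away --
# machines are cheap enough that destroying beats patching.
ec2.terminate_instances(InstanceIds=["i-0abc123def456"])  # hypothetical old host
```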
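The Washington Post workload is an embarrassingly parallel batch job: thousands of independent pages that can be processed concurrently. Here is a minimal local sketch of that shape using Python’s multiprocessing; `process_document` is a hypothetical stand-in for the real image-processing step, and on a cloud the worker pool becomes a fleet of machines.

```python
from multiprocessing import Pool

def process_document(path):
    """Stand-in for the CPU-heavy step, e.g. rendering a PDF page to an image."""
    return f"processed {path}"  # real work would call an imaging/OCR library here

if __name__ == "__main__":
    documents = [f"doc_{i}.pdf" for i in range(1000)]  # hypothetical input set
    # Locally this spreads work across 8 processes; in the cloud, the same
    # map step spreads across as many machines as the budget allows.
    with Pool(processes=8) as pool:
        results = pool.map(process_document, documents)
    print(len(results), "documents processed")
```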
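On the DR point, the sketch below shows where RDS recovery behavior is actually configured: automated backups and a standby replica are options you opt into and verify, not automatic guarantees. Parameter names follow today’s boto3 RDS API; the identifiers and credentials are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Recovery characteristics are a configuration choice, not a default.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",   # hypothetical
    DBInstanceClass="db.t3.micro",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me",     # placeholder credential
    AllocatedStorage=20,
    BackupRetentionPeriod=7,  # days of automated backups; 0 disables them
    MultiAZ=True,             # keep a standby replica in another location
)
```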
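A toy model of eventual consistency illustrates the bank-deposit behavior: writes land on a primary immediately, replicas catch up after a lag, and a read from a replica can be stale in the meantime. This is an illustrative sketch, not any particular datastore.

```python
import threading
import time

class EventuallyConsistentStore:
    """Toy model: writes hit a primary now, replicate to followers later."""

    def __init__(self, replicas=2, lag_seconds=1.0):
        self.primary = {}
        self.followers = [{} for _ in range(replicas)]
        self.lag = lag_seconds

    def write(self, key, value):
        self.primary[key] = value
        # Replication happens asynchronously, after a delay.
        threading.Timer(self.lag, self._replicate, args=(key, value)).start()

    def _replicate(self, key, value):
        for follower in self.followers:
            follower[key] = value

    def read(self, key, replica=0):
        # Reads from a follower may return stale (or missing) data.
        return self.followers[replica].get(key)

store = EventuallyConsistentStore()
store.write("balance", 100)
print(store.read("balance"))  # likely None: replication hasn't caught up
time.sleep(1.5)
print(store.read("balance"))  # 100: all replicas eventually converge
```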
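The “pay for the performance you want” trade reduces to a simple control loop: when observed latency exceeds the target, buy more instances; when there is comfortable headroom, release them and stop paying. The thresholds and step size below are arbitrary illustrations.

```python
def desired_instance_count(current, p95_latency_ms, target_ms=200,
                           min_instances=2, max_instances=50):
    """Naive scaling rule: trade money (instances) for latency."""
    if p95_latency_ms > target_ms:
        current += 1  # paying for more capacity buys latency back
    elif p95_latency_ms < target_ms * 0.5:
        current -= 1  # release capacity you no longer need to pay for
    return max(min_instances, min(current, max_instances))

print(desired_instance_count(4, 350))  # 5: latency too high, scale out
print(desired_instance_count(4, 80))   # 3: plenty of headroom, scale in
```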
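Moving protection from the network perimeter into the data itself typically means encrypting at the file or record level, so the ciphertext stays safe wherever the workload roams. A minimal sketch with the `cryptography` package’s Fernet recipe; key management is hand-waved here, and in practice the key would live in a key-management service.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, store in a key-management service
fernet = Fernet(key)

record = b"account=1234; balance=100.00"  # hypothetical sensitive data
token = fernet.encrypt(record)            # the ciphertext can roam freely

# Only a holder of the key recovers the data, wherever the file ends up.
assert fernet.decrypt(token) == record
```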
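Finally, the “code data, infrastructure and application all at once” idea shows up in PaaS deployment manifests. Cloud Foundry’s real manifest is YAML; the Python sketch below merely mirrors its common fields to show how a single package can describe code, data services and instance counts together. All names are hypothetical.

```python
# A rough, Python-flavored approximation of a PaaS deployment manifest.
manifest = {
    "applications": [{
        "name": "orders-web",       # hypothetical application name
        "path": "./build",          # the application code artifact
        "instances": 4,             # infrastructure: how many hosts to run
        "memory": "512M",
        "services": ["orders-db"],  # data service bound at deploy time
    }]
}

def deploy(manifest):
    """Sketch of what a PaaS does with the manifest: one package drives
    code, data services and infrastructure together."""
    for app in manifest["applications"]:
        print(f"staging {app['name']} from {app['path']}")
        print(f"binding services: {', '.join(app['services'])}")
        print(f"starting {app['instances']} x {app['memory']} instances")

deploy(manifest)
```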
Croll demonstrated that conventional thinking associated with enterprise application development will not successfully transfer to a utility computing model. Applications need to be service-oriented and architected to support horizontal scaling and dynamic underlying infrastructure.