When people invent or significantly improve services or technologies, they generally focus on their own domains (software vendors, especially, are tempted to cater to their market first). Cloud computing is no exception. IBM, for example, defines its Rainmaker technology as software and hardware that work together to help enterprises create clouds. And, as usual, the devil lives in the details: that software and hardware will work together in a very proprietary way.
Today, we can no longer afford to let the market decide for us, the end users, once again. As with the financial crisis, a lack of regulation generally leads to an enormous disaster. If we take the same approach to cloud computing, businesses will lose most of the key benefits they could potentially get. This paper is modestly intended to share my thoughts on how agility and new market rules are deeply changing the way we "consume" IT.
I would like to call for the creation of a user-oriented and independent cloud community. This community would centralize and leverage talent to define clear requirements on what the Cloud should be from a user standpoint, and also to enable better collaboration between current open source cloud projects. We need to do everything we can to avoid a new de facto Yalta in Cloud Computing: the Cloud world cannot be governed by a handful of companies (Amazon, Salesforce.com, Google, Microsoft, and whatever IBM or HP come up with). We have seen what happened with SOA.
Information Technology is dead, long live Agile Business Technology[1]
Business needs to be agile to survive and requires "open, easy to deploy and interoperable" IT solutions. In most companies outside the IT sector itself, Information Technology as such is no longer the concern; long live Agile Business Technology. A typical product management team is always searching for the perfect technology at the perfect price, at the perfect moment, for delivering the perfect product at the perfect time through the perfect channels. "Perfect" is a business term that maps, in IT wording, to "agile", "quick", and "cost efficient". And let's be honest: we all know that Software as a Service (SaaS) is used by Lines of Business (LOBs) to reduce internal IT resources and IT costs. Beyond cost, LOBs want to take back control of their destiny by selecting and configuring the tools they need "on demand". Business Technology, a mix of customizable business processes and their implementation in the Cloud, is here to stay.
Agile pervasiveness will impact IT processes and organizations, IT development and test tools and technologies, and finally IT operations and infrastructure. In the following sections, we present some examples to outline how many different kinds of forces converge on a focal point: cloud computing.
Pervasive agility in IT processes and organizations
My organization discovered Agile two years ago, and we will never build software the same way again. Approaches like Scrum, XP, and Lean management have both improved the quality of the assets produced and decreased time to market. When applied with a real willingness to succeed, these process frameworks surface organizational dysfunctions and deliver significant results.
Process frameworks always need to be customized and adapted to a particular environment, but they follow some key principles. Those generally cited are: communication and trust between team members, integration of the business (the product owner) within the IT team, releasing and deploying software often at regular, well-defined intervals (sprints), and continuously looking for waste of any kind and for ways to eliminate it.
Pervasive agility in IT development and test technologies
Let's review some recent advancements in IT development and test technologies that can be considered Agile catalysts:
- Dynamic Software Service Lifecycle Management - Software service lifecycles can be managed more dynamically within a particular technical domain (OSGi, for example, for Java-based services, or OS services with a micro-kernel); a minimal sketch follows this list
- Application Virtual Machines - The Java JVM and the .NET DLR made it possible to treat the web as the platform and to decouple programming languages from execution platforms. An application can also be isolated in a coherent, OS-dependent package and redeployed on demand (see VMware ThinApp or AppZero, for instance)
- Dynamic Programming Languages - Software development is eased by a new generation of native dynamic programming languages (like Ruby), by adaptations of a language to a particular virtual machine (JRuby on the JVM, IronRuby on the DLR), or by languages built on top of virtual machines (like Clojure or Scala, which integrate with both Java and C# code)
- Testing Tools - Testing is still under heavy consideration, since it has proven to be one of the key elements of sustainable and Agile IT. Testing is also time consuming, error prone, and financially costly. Today you can "code directly" or "express" functional tests, unit tests, mock tests, load tests, stress tests, behaviour-driven tests, etc. Most of the tools (Fitnesse, Selenium, Fitnium, JMeter, The Grinder, etc.) and test APIs (xUnit, RSpec, JWebUnit, DBUnit, etc.) available on the market are mature, and most of them are free (a minimal xUnit example also follows this list). Load testing is now also offered in the cloud on a pay-per-use basis to best simulate real user spikes (SOASTA, BrowserMob, Keynote, LoadStorm, CloudTestGo, etc.)
- Continuous Test and Build - The linchpin of agile, backed by solid sets of integrated tools. Hudson or CruiseControl manage the test and build tasks and integrate with Sonar or XDepend for code quality evaluation, not to mention Ant or Maven (with Nexus or Archiva for managing the build artifact repository if needed), Subversion, Checkstyle, FindBugs, Cobertura, PMD, FxCop, etc. Commercial suites like Electric Cloud or Atlassian are providing simple-to-install, pre-integrated build and test platforms. Code quality is now also available as a service (Kalistick or Metrixware, for example)
- Model-Driven Approach - The Object Management Group (OMG) is still advancing the UML specification (now at version 2.1) and UML profiles (like the recent SysML, used to describe software deployment and infrastructure). It is worth noting that OMG is more and more involved in standardizing business-related technologies. Microsoft is investing heavily in Oslo, and UML will be implemented in Visual Studio 2010
- Model-Driven Engineering Tools - Tools leveraging model-driven approaches are numerous on the market and become cost effective after two or three projects. For example, look at Orchestra Networks' EBX Platform for generic model-driven Master Data Management (MDM), Obeo or Sodius for automated application development, Blu Age Software and Metaware for automated legacy modernization, and E2E for direct execution of modeled services
- Software as a Service (SaaS) - Why develop and host your application on premise if you can share it with other customers and pay just a fraction of the cost to use it? More and more companies are adopting CRM on demand (with salesforce.com or Oracle CRM On Demand), agile planning with Rally Software, or talent management systems with SuccessFactors or Plateau Systems
- Integration as a Service - Boomi and Cast Iron already offer a great number of adaptors to integrate with or between SaaS applications. Obviously, all major software integration vendors will soon revitalize their offers in this area (Informatica On Demand, TIBCO Silver, and Microsoft Azure are going that way). It is nevertheless surprising that open source integration suppliers (like WSO2) and their ecosystems have not yet provided a true cloud offer
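To make the OSGi point above concrete, here is a minimal sketch of a bundle activator that publishes a service dynamically and withdraws it later, without restarting the JVM. Only the BundleActivator contract comes from OSGi; the GreetingService interface and its implementation are hypothetical names invented for this example:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

interface GreetingService { String greet(String name); }

class GreetingServiceImpl implements GreetingService {
    public String greet(String name) { return "Hello, " + name; }
}

public class GreetingActivator implements BundleActivator {
    private ServiceRegistration registration;

    public void start(BundleContext context) {
        // Publish the service; other bundles can discover and bind to it
        // dynamically, while the platform keeps running.
        registration = context.registerService(
                GreetingService.class.getName(), new GreetingServiceImpl(), null);
    }

    public void stop(BundleContext context) {
        // Withdraw the service; consumers are notified and can rebind.
        registration.unregister();
    }
}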
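On the testing side, here is the promised minimal xUnit example (JUnit 4 in this case; the PriceCalculator class under test is invented for illustration):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    // Invented class under test: a trivial discount computation.
    static class PriceCalculator {
        double applyDiscount(double price, double rate) {
            return price * (1.0 - rate);
        }
    }

    @Test
    public void tenPercentDiscountIsApplied() {
        PriceCalculator calculator = new PriceCalculator();
        // 100.0 with a 10% discount should cost 90.0 (with a double tolerance).
        assertEquals(90.0, calculator.applyDiscount(100.0, 0.10), 0.001);
    }
}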
Pervasive agility in IT Operations
Buying a server for a datacenter and having it installed is usually a three-month endeavour. And I am not even talking about installing a DBMS cluster or setting up a VLAN. Installing several applications on the same server can lead to side effects (conflicting software versions, configuration clashes), not to mention organizational warfare.
Agility in IT operations is coming from new-generation datacenters, Green IT constraints (cooling and energy), virtualization techniques (increasing virtual machine density on hardware), and shared infrastructure (load balancing, storage, firewalls, security appliances, etc.).
Agility is also created by extending the agile continuum to embrace IT operations (an approach that could be coined Continuous Deployment).
Let's see how quickly the infrastructure landscape has evolved and how it is now being offered as a service (Infrastructure as a Service):
- CPU Virtualization - CPUs now have several cores, and new CPUs offer hardware acceleration for virtualization instructions. You can request several dozen CPUs to run your workloads on Amazon, RightScale, ElasticHosts, Rackspace, etc. (a provisioning sketch follows this list)
- Server Virtualization for Private Clouds - Hardware servers are now virtualized and may be deployed and cloned on demand. Platform VM Orchestrator, VMware vSphere, Citrix Cloud Center, and Red Hat Enterprise Virtualization Manager are commercial tools for managing virtualized servers, and thus for building private clouds. Private clouds can also be hosted in the Cloud or based on open source platforms (like Enomaly, Eucalyptus, or Nimbus)
- Virtual Machine Portability - Interoperability is not only about standardizing interfaces between clouds, but also about the portability of virtual machines. The DMTF Open Virtualization Format (OVF) is a first step in the right direction, but it was not created with the Cloud in mind, so the war on standards is far from over. Contrary to what is often claimed, converting your VMs from VMware VMDK to OVF, to Xen, or to Amazon AMI and making them run everywhere is not trivial
- Network as a Service - The network must be enabled as a core infrastructure service offering fully automated lifecycle management of network assets (IP addresses, DNS names, etc.). Network services should be agile and workload-aware, leveraging the policy requirements expressed by the VMs
- Lightweight Execution Platforms - Today's trend is to provide fully integrated, lightweight platforms offered as a service (named Platform as a Service, or PaaS). Several companies are trying to change the rules of the software development and delivery game, like Google with App Engine, Microsoft with Azure, or Force.com with AppExchange and its cloud platform. Others are available both on premise and on demand, like PHP on Zend, Java on SpringSource and MuleSoft, and Rails on Heroku, Aptana, or Engine Yard
- Database on Demand - Databases on demand fall into two groups: key/value stores, like BigTable exposed as a web service or Amazon SimpleDB, and relational databases, like MySQL (on Joyent or on Amazon EC2) or SQL Server (SQL Azure)
- Cloud Integrated Platforms - Hybrid cloud providers can create and manage your public and private clouds (like Abiquo, OpenNebula, Elastra, or Appistry CloudIQ)
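To illustrate the on-demand provisioning mentioned above, here is a hedged sketch using the AWS SDK for Java; the credentials, image id, and instance counts are placeholders, and other suppliers expose similar calls through their own APIs:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class ProvisioningSketch {
    public static void main(String[] args) {
        // Placeholder credentials: replace with your own account keys.
        AmazonEC2 ec2 = new AmazonEC2Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // Ask for between 1 and 4 small instances of a given machine image;
        // the supplier decides how many it can actually start right now.
        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-12345678")   // placeholder image id
                .withInstanceType("m1.small")
                .withMinCount(1)
                .withMaxCount(4);

        RunInstancesResult result = ec2.runInstances(request);
        System.out.println("Started "
                + result.getReservation().getInstances().size() + " instance(s)");
    }
}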
For more information on the Cloud revolution, you can read the recent CSC Leading Edge Forum (LEF) report on the subject.
Cloud enablers and inhibitors
In the Cloud, deployments and operations can be more agile and better linked in the same process continuum. But does that mean we can simply create an application and deploy it on Windows Azure or Google App Engine? Do we still need to care where the application code will be deployed and how many CPUs it will require to sustain the load? Do we still have to manage the application "-ilities" ourselves, which is time consuming, error prone, and requires additional, unnecessary code in our applications? Are the application's business and technical events well designed and correlated enough to deliver the elasticity needed? I think you already know the answers. That could be the reason why LOBs have not yet jumped on the bandwagon and adopted the cloud massively.
So what are the major inhibitors to cloud adoption?
- Cloud technology is not mature enough - The cloud technologies proposed are still too often in beta, mainly proprietary, not interoperable (an example of an interoperability test can be seen here), and not yet adapted to IT operations (today they mainly target developers). So moving an application and its data from one PaaS supplier to another requires a lot of work. Vendor lock-in syndrome again
- Cloud technology research is in its infancy - The European Commission just launched the RESERVOIR project to "provide a foundation for a service-based online economy", while HP, Intel, and Yahoo are sponsoring Open Cirrus, a "cloud-computing research testbed designed to support research into the design, provisioning, and management of services at a global, multi-datacenter scale"
- The Intercloud is not fully meshed - In a flat world, every solution needs global reach from its inception. The major players in the domain are still investing massively in their datacenters and trying to install them around the globe (and they should be green!). Agile network infrastructures are key to enabling cloud asset agility, portability, and replication, and to creating higher-level services like Disaster Recovery and/or Fault Tolerance (see the Infoblox webinar for more details). As of today, there is still a lot of blue sky between the clouds. In addition, there is no solution yet for multicast
- No big servers or databases available yet - It is nearly impossible to provision very large machines (for example, with 12 CPUs and 64 GB of contiguous memory) or extremely large relational databases
- The cloud cost model is elastic - Today, it is very difficult to forecast the cost of cloud assets, since they adapt to demand. The pay-per-use model, as opposed to provisioning for peak load (and risking over-provisioning, which means underutilization), is hard to sell to corporate finance and IT audit teams. And since finance governs the world, they will resist until we can make them confident. That is also why micro-companies are the first users of the cloud: their structure is elastic and grows with the business (their budget is elastic too). Moreover, all studies show that the steady-state cost of on-demand cloud infrastructure (without a long-term contract or specific negotiation) is generally higher than owning your own hardware, although prices will drop in the future (a toy break-even computation follows this list)
- Cloud Service Level Agreements (SLAs) are nearly impossible to achieve today - Business-critical application owners do not want to take any risks, and nobody will blame them. Some research projects have nevertheless already been launched (see SLA@SOI, for example)
- Security and compliance are under scrutiny - We are moving from data centers to centers of data (private, public, hybrid). As usual, it will take some time to adapt the cloud to certain stringent security requirements (see the Cloud Security Alliance). For example, the US government requires that suppliers' data and applications be located on US soil (Google recently created a dedicated datacenter to meet this particular requirement). The EU is also very careful about anything related to the data privacy of its citizens
- LOB organizational changes are deep - Who will do the job previously done by system and database administrators? How does ITIL cope with elastic change? If I experience an outage, how will I detect it and react to it?
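To make the cost-model inhibitor tangible, here is the toy break-even computation announced above. All figures are invented for illustration only; plug in your own amortization and hourly rates:

public class CloudBreakEven {
    public static void main(String[] args) {
        // Invented figures: an owned server amortized at $250 per month
        // versus a comparable on-demand instance at $0.40 per hour.
        double ownedPerMonth = 250.0;
        double onDemandPerHour = 0.40;
        double hoursPerMonth = 730.0;

        // Utilization below which paying by the hour beats owning the box.
        double breakEven = ownedPerMonth / (onDemandPerHour * hoursPerMonth);
        System.out.printf("Pay-per-use wins below %.0f%% utilization%n",
                breakEven * 100); // roughly 86% with these figures
    }
}

With these invented numbers, pay-per-use is cheaper only below roughly 86% utilization, which illustrates why steady-state workloads are often cheaper on owned hardware while spiky workloads favor the cloud.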
Those inhibitors are pushing the major players in the field to define their own services first, even when proprietary interfaces or technologies are required. Their objective is to grab the lion's share of the market as quickly as possible and to impose their standards. As a user, what does this mean for my LOBs? The risk of vendor lock-in is too high today to justify such a move, especially since it is not backed by attractive cost models. Except for TIBCO, which is still not clear on how it will price Silver, most of the other vendors follow the same approach (at nearly the same price).
Avoiding the IT systemic crisis
In order to instill more competition and let new, innovative, and independent actors emerge, we must:
- Convince all cloud vendors to build commercial cloud offers based on the different classes of users (government, education, industry, defense) and their needs (a massive computation grid and a load-balanced set of shared web servers implementing an extranet portal, for example, should be priced differently).
- Find the right organizations to ensure proper governance of a growing Cloud ecosystem, with the objective of avoiding a future IT systemic crisis.
- Ensure fair access to cloud resources whatever the size and location of the client (from a small developer working at home to an international company with tens of thousands of employees). Global cloud deployment will contribute to the worldwide diffusion of agile and sustainable IT resources and help reduce the North-South technology gap.
- Ensure the sustainability and interoperability of the solutions proposed by cloud suppliers, to reduce the risk of losing the investments LOBs have made in the Cloud.
- Guarantee incentives to the cloud providers that comply with regulations (for example, reducing some of the taxes they pay, or giving them part of key markets like education or government).
- Plan and fund training and coaching for the thousands of IT people who have lost their jobs or could lose them in the future (what economists call creative destruction).
- Fund and support powerful and independent open source ecosystems (like Apache, or OW2, which just merged with OSA). The pace and number of open source companies or project teams bought recently is a clear sign of market consolidation (SpringSource is the perfect example), and it will continue in the coming months. We should protect some key assets and keep them in the public domain when possible (for example, key Sun technologies and products like Java, MySQL, or OpenSSO).
This time, let's avoid constraining the future of business agility with islands of proprietary cloud solutions interconnected to maximize profit (like airlines, where some routes are cancelled today to achieve this goal) rather than to enable the best possible mesh (as the Internet does today). Innovation also comes through the hackability factor (Get, Understand, Improve, Invent)…
Enabling Business Technology to SHINE over the Cloud
Autonomous stakeholders requiring agile business processes are changing the way business and IT groups should collaborate and deliver. Existing information systems are so complex that it is nearly impossible to make them react to business demand. It is the right time to eat the elephant[2], one bite at a time. Information Technology alone is no longer a valid path to success; IT should be injected into the business in order to create Business Technology. Smooth change management and an adapted organizational evolution towards an agile world are possible if you follow some key principles, the most important of which are summarized below (and coined the SHINE principles):
- Small is beautiful - Put small teams (6 to 10 people) in charge of all aspects of a well-defined business domain subset, from inception to production. If you want to use the Cloud, the best approach is to move to a service-oriented approach (if SOA is dead, then call it a service-based approach) and implement business services in vertical organizations. It's agile from top to bottom. Use continuous deployment, meaning test and deploy your application often and securely. Automation is now possible and recommended
- Heterogeneity is your friend - You are free to use the technology best adapted to the business need, but you will be fully in charge of it. Use the language you want to develop your application, the platform (as a service or not) you want, the people you need, etc. In the end, you alone will be responsible for the time to market and the quality of your service
- Interface with Open APIs - Increase agility wherever possible, especially at the edges of business services and processes. Define the semantics of your messages clearly, and create understandable, secure (see the Google Secure Data Connector, for example), easy-to-use interfaces. Reuse open standards and propose open APIs to make your service SHINE in others' clouds (a minimal interface sketch follows this list). Also think ahead about inter-cloud interfaces (like the Unified Cloud Interface or OCCI). Based on business value chains, those services could also be dynamically or statically orchestrated to offer more complex or more granular services (which can themselves be offered as a service)
- Nudge your cloud - Create coopetition (cooperation plus competition) when needed in order to provide the best service to your clients. Amazon lets suppliers use its platform and propose the pricing they want for any goods: use my platform, use my process, use my master data, increase my market share, make clients and suppliers use my service, pay me a fee anyway, and make my clients happier. And remember the Long Tail: agile does not mean short term!
- Elasticity, agility, resilience - To benefit from the cloud computing model, business applications and services should be (re)designed and (re)built with elasticity at their heart. Elasticity is needed to adapt to short-term spikes (resilience) and to enable periodic batch jobs requiring massive computation resources (agility)
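As a small illustration of the "Interface with Open APIs" principle, here is a sketch of a business service exposed through a standard REST interface using JAX-RS annotations. The Customer type and its content are hypothetical, and a real service would delegate to a proper repository:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical customer-profile service published at a stable, documented URI.
@Path("/customers")
public class CustomerProfileResource {

    @GET
    @Path("/{id}")
    @Produces("application/xml") // a clearly defined message representation
    public Customer getCustomer(@PathParam("id") String id) {
        // A real service would delegate to a repository; this is a stub.
        Customer customer = new Customer();
        customer.id = id;
        customer.name = "ACME Corp";
        return customer;
    }

    // Minimal message schema; a real open API would document a richer one.
    @XmlRootElement
    public static class Customer {
        public String id;
        public String name;
    }
}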
I recently heard about Lokad, a small French company that SHINEs. Lokad delivers Business Technology and specializes in forecasting software for sales, demand, and call volumes. With the cloud, it can compute forecasts for a major French retailer in one hour (a business SLA imposed by the client!) instead of several, at a fraction of the cost. Lokad invested in Azure, even though it was still in beta at the time, and rebuilt its application using Microsoft's web and worker role definitions. It was able to provision as many resources as needed (though not always as powerful as it would like: ten single-CPU VMs with 2 GB each do not equal one 10-CPU machine with 20 GB), to do something that was not possible before (due to computation time), and even to create and implement more complex algorithms that provide better forecasts (since computation time is no longer an issue and the software architecture is not the same).
Software vendors, like IBM, are also working on new business technology services that SHINE. Massive Mashups (M2) is an extension of the mashup paradigm that integrates gigabytes, terabytes, or petabytes of unstructured data from web-based repositories, extracts and enriches that data using the unstructured information management architecture you choose (LanguageWare, OpenCalais, etc.), and lets you explore and visualize this data in specific, user-defined contexts (such as Many Eyes). IBM has also developed an adapted agile process to help clients use M2: it begins with a two-to-three-hour briefing to identify the needs and see how best to use M2, followed by a day and a half to set up the base service and two to four days to finalize the implementation.
Cloud OS for managing Business Technology
The first requirement LOBs are expressing is a systemic vision of business technology. How do you enable different stakeholders to access and understand the agile and elastic assets of the company? Assuming that today's war around cloud standards and the current lack of interoperability will eventually be resolved, what can be done?
We argue that a new kind of distributed operating system should be invented to enable "Just in Time" IT. It would let you command and control your cloud (and not just one machine). This Cloud OS would be capable of performing operations on any entity managed in the cloud (creating a server, adding a database, transferring data, deploying applications, etc.). We envision that the following key entities would be managed by this OS:
- Server - provides CPU cycles
- Application - connects portfolio assets with deployed software assets
- Storage - offers low-level binary storage (can be a SAN, a NAS, or a Content Management System)
- SVN - manages code/configuration versioning
- Service - managed as a business executable asset that delivers value (hard dollars)
- Interface - defines any communication between executable assets
- Cache - several caching tools and techniques could be used depending on the need; could be transparent to an application if part of the platform execution core
- Event and rules - enable and control elastic behavior
- SLA - defines and manages service level agreements
Let's take a simple example, expressed as a Scrum user story, of the level of control we could envision. A new project with two applications (one on premise and one on demand) is to be launched in the company. We need to run this project and make it a reality on the company cloud. It would be great to use a command line interpreter for this work. So the first objective is to have a single set of commands, with proprietary cloud adapters integrated into the Cloud OS to translate those commands into the right APIs (it seems we're back to EAI…). To manage business technology and offer a systemic vision, we should gather information about the application portfolio, the application code to be deployed, the application configurations and non-functional requirements, and the elastic infrastructure to be used, all managed and aggregated in one place: the Cloud OS (also called the meta-OS by 3tera).
To build my small example in pseudo-code, I tried to reuse commands already available in cloud suppliers' APIs, to add the data needed for managing the portfolio, and to add the attributes required by enterprise architecture. What is needed here is to find the common denominator across several domains:
CREATE DOMAIN myPortfolio                 ; my logical domain, composed of two applications
ADD myPortfolio APP1 Critical OnPremise   ; APP1 is a critical application - app. portfolio
ADD myPortfolio APP2 Maintain OnDemand    ; APP2 is in maintenance mode - app. portfolio
; Let's begin with APP1
APP1 CREATE                               ; Definition of APP1
APP1 IMPLEMENTS CustomerOnBoarding        ; Link APP1 with a business process
APP1 STATUS = Production                  ; APP1 is in production mode (portfolio)
; Automated cloud-based deployment
APP1 SVN https://mycompany.com/svn/myPortfolio/app1.xml
APP1 REQUIRES JVM 8.x                     ; (we are dreaming ...)
APP1 SCALABILITY HORIZONTAL DYNAMIC       ; Elasticity: create CPUs when needed
APP1 PREFERRED AWS, RIGHTSCALE            ; Elasticity: preferred cloud suppliers
APP1 RULE "COST < $200 PER DAY"           ; Elasticity: rule for elastic cost containment
APP1 SLA = 99.9                           ; Elasticity: target service level
; Automated cloud-based DBMS creation and data load
APP1 DATA SCHEMA https://mycompany.com/config/myPortfolio/Data/DataSchema.xml
APP1 DATA LOAD https://mycompany.com/config/myPortfolio/Data/DataDump.dat
APP1 DATA COMPUTE GRID MAPREDUCE https://mycompany.com/config/myPortfolio/Data/grid/MapReduce.cfg
APP1 STORAGE EXTEND ON DEMAND             ; Elastic storage
APP1 STORAGE DR ENABLED                   ; Storage with disaster recovery
; Security information
APP1 SECURITY SSO
APP1 SECURITY OPENID
APP1 INTERFACE WITH APP2 USING HTTPS      ; Service call
APP1 USE APP2                             ; Dependency management at architecture level
APP1 USE Service1 https://mycompany.com/config/myPortfolio/service1.wsdl
APP1 USE Service2 https://mycompany.com/config/myPortfolio/service2.wsdl
; Monitoring
APP1 MONITOR ON MonitTweeter #APP1        ; Monitoring through a Twitter-like interface
APP1 END
; Then APP2
ATTACH APP2 https://www.app2.com/         ; APP2 is on demand
APP2 IMPLEMENTS CustomerProfile           ; Link APP2 with a business process
APP2 NEED OpenID                          ; APP2 uses OpenID
APP2 CERTIFICATE https://mycompany.com/certificate/app2.key ; Requires a certificate
APP2 END
END DOMAIN myPortfolio
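Below is a minimal Java sketch of the adapter layer such an interpreter would rely on. All names are hypothetical; each implementation would translate the neutral commands above into one supplier's proprietary API:

import java.net.URL;

// Neutral contract the Cloud OS interpreter programs against.
public interface CloudAdapter {
    String createServer(String domain, int cpuCount, int memoryMb);
    void deployApplication(String serverId, URL buildArtifact);
    void setScalingRule(String appId, String rule); // e.g. "COST < $200 PER DAY"
    void monitor(String appId, String channel);     // e.g. a Twitter-like feed
}

// One adapter per supplier; the interpreter would pick an adapter from the
// APP1 PREFERRED clause and fall back to the next one if provisioning fails.
class AmazonAdapter implements CloudAdapter {
    public String createServer(String domain, int cpuCount, int memoryMb) {
        // ... translate into the supplier's proprietary provisioning calls ...
        return "server-id-placeholder";
    }
    public void deployApplication(String serverId, URL buildArtifact) { /* ... */ }
    public void setScalingRule(String appId, String rule) { /* ... */ }
    public void monitor(String appId, String channel) { /* ... */ }
}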
Cloud OS - Back to the ground level
Virtualization driven by Business Technology innovation is holistic (server, storage, and network). The Cloud requires parallel and distributed operating system competencies that will force us to forget the local, operating-system-centric vision (Microsoft Windows 7 may well be the last operating system of its generation). We should also reuse much of what is already available for distributed monitoring, configuration, and dynamic mapping of infrastructure.
One obvious proposal would be to create a specific, lightweight, Linux-like micro-kernel optimized for the cloud. It should be able to run natively on several types of processors and to run most of the current hypervisors on the market (or at least to interoperate with them over the network). A micro-kernel instance would run on each processor or CPU core. A central system would also collect data and correlate it dynamically in order to adapt the distributed system to its current state (gathering a global state in a distributed system is not trivial) and to issue remote commands when needed (distributed OSGi on steroids).
Embryonic initiatives have been launched all over the world to solve some of those key issues. Some of the most promising include:
- Deltacloud - An open source (LGPL) API that abstracts the differences between clouds. The framework makes it easy for cloud providers to add their cloud to the Deltacloud common API. It could serve as the integration layer of the Cloud OS
- Kaavo's Infrastructure and Middleware on Demand (IMOD) - Provides an application layer that masks the heterogeneity of the cloud. All deployment and configuration information is stored in a single file, split in two parts: the first defines the static artifacts of an application (tiers, servers), and the second specifies the actions the IMOD engine performs when the application is launched. This flow of actions is defined in the form of Velocity templates
- Cloudloop - A universal, open source Java API and command-line tool for cloud storage that lets you store, manage, and sync your data between all major providers
- The 3tera AppLogic system - Used to create a private cloud. It is a cluster of interconnected systems running Red Hat Linux, Xen, and the AppLogic orchestration system. The fundamental building block, called an "appliance", packages both the virtual machine (running the guest operating system) and the application
- The EU-funded XtreemFS - A new open source cloud file system; the project studies how to combine the advantages of peer-to-peer (P2P) communication architectures with the power of Content Delivery Networks (CDNs) like Akamai
- The Process Virtual Machine - A simple Java library for building and executing process graphs. It serves as a basis for all kinds of workflow, Business Process Management (BPM), and orchestration process languages
Enterprise Architecture will never be the same
The Cloud will force enterprise architects to adopt a systemic and as-dynamic-as-possible vision of elastic systems. Previous approaches, where IT landscape data was loaded statically (or drawn) into dedicated tools (like ARIS, Mega, or Casewise), will no longer be viable. Integrating enterprise architecture, application portfolio, and infrastructure management is an absolute necessity to keep a global vision of the state of a domain and to see its impact on the business value chain.
Elastic applications running on autonomic infrastructure will challenge old ways of designing, deploying, testing, documenting and maintaining IT systems. We will nevertheless need to create hooks to make our IT agile enough to quickly enact new business decisions (before the big jump to the Cloud or its successor).
Business process description, integration, and optimization will become more and more important in the future, creating new job opportunities in LOB organizations. On the other side, the jobs of technical architects, infrastructure experts, and planning and portfolio managers will become more and more intertwined.
Conclusion
I hope that by applying the simple SHINE principles, you will be able to use Cloud Computing at your own pace, having understood that Cloud Computing is still immature and may require some rework in the way applications are developed and deployed. It is, however, the right time to prepare your enterprise for the Cloud.
Going from virtualization to a private cloud is basically a step towards providing self-service capabilities to application owners. It increases flexibility and improves time to market, but it also increases management complexity, as it adds another layer of abstraction. Processes and tools are still needed to manage this layer.
The biggest advantage of using the cloud is that you do not have to build capacity for exceptional peaks. You can nevertheless design, on purpose, for elastic demand and benefit from it to accomplish work more quickly or at a fraction of the cost.
If we leave the Cloud in IT vendors' hands, as they try to convince the business to use it to reduce costs (which has still to be proven), we will once more lose the opportunity to create a continuously agile company, and internal IT will simply be externalized to slightly more agile external suppliers. It is time to set up, or advocate for, an organization that can ensure the governance of a growing Cloud ecosystem. The objective here is to avert a future IT systemic crisis.
I do hope that, in the near future, an operating system like the one I have just sketched will exist. I do not foresee severe technical integration issues that could prevent its realization. Furthermore, adopting an agile approach, where the granularity of entities and their functional and non-functional attributes are well scoped, could yield easy quick wins. How this operating system will (re)act dynamically to several kinds of events, and how clever it will have to be to avoid over-engineering, are still subjects of research and discussion.
[1] "My View: IT To BT", by George F. Colony, Forrester CEO, August 18, 2006 (http://blogs.forrester.com/colony/it-to-bt.html)
[2] "Eating the IT Elephant: Moving from Greenfield Development to Brownfield", Richard Hopkins and Kevin Jenkins, IBM Press, 2008.
About the Author
William El Kaim is Lead IT Architect for a large European Travel Agency. He is based in Paris, France and blogs at http://blog.resilient-it.com/
Disclaimer: All opinions expressed in this document are his and do not reflect the opinion of his employer.