With the fast pace of cloud changes (new services, providers entering and exiting), cloud lock-in remains a popular refrain. But what does it mean, and how can you ensure you're maximizing your cloud investment while keeping portability in mind?
This InfoQ article is part of the series "Cloud and Lock-in". You can subscribe to receive notifications via RSS.
There's no shortage of opinions on the topic of technology lock-in. Where does it really hurt you? Is open source the answer or full of false promises? InfoQ reached out to four software industry leaders to participate in a virtual panel on this topic:
- Joe Beda is an Entrepreneur in Residence with Accel Partners. While at Google, he started transformative services like Google Compute Engine and Kubernetes.
- Simon Crosby is the co-founder and CTO of Bromium and previously held leadership positions at Citrix and Intel.
- Krish Subramanian is SVP of Products and Strategy at CloudMunch, with a long history in the cloud community.
- Cloud Opinion is a parody account known for insightful commentary about cloud technologies.
Below is the lively discussion that played out among the participants over a five-day period.
InfoQ: Give us your definition of technology lock-in, and who you think really cares about it.
Joe: Lock-in is any technical or process decision that limits your degrees of freedom in the future. The only way to avoid lock-in is to not build anything. With this in mind, the name of the game isn't avoiding lock-in but recognizing, characterizing and managing it.
Standards and open source are effective lock-in mitigation strategies. Both of them open up options that don't exist under closed systems and protocols. This is why lock-in is often associated with taking a long term vendor dependency.
I'll define a new term -- "lock-in event horizon". This is the point where the switching cost of changing a decision is greater than what the business can bear. At this point the decision is essentially set in stone. This is why many banks still run on outdated and expensive mainframes.
Cloud Opinion: Technology lock-in is when your technology choices make it difficult/expensive to adopt a better tech in the future. An example from the 2000s is when organizations grappled with whether to adopt JEE or .NET. Funnily enough, for all the analysis paralysis of that time, it did not turn out to be a big deal. You were right whether you chose JEE or .NET - what mattered was that you chose one quickly and moved on to building apps.
Vendor lock-in is a derivative of technology lock-in. An example here is that, after choosing JEE, some organizations then chose either IBM's BlueStack or Oracle's RedStack. Organizations that chose BlueStack or RedStack have suffered in their inability to adopt new technologies; they had to slow down their innovation to the innovation pace of their vendors. These customers are now slowly flocking to new PaaS alternatives. (Choosing .NET was both a technology lock-in and a vendor lock-in at that time.)
Vendor lock-in is inevitable; what IT executives have to do is evaluate it like any other risk. Beware of the inability to act due to analysis paralysis.
Krish: Let me first define the conventional wisdom (I prefer to call it legacy wisdom). According to the legacy mindset, it is OK to accept a certain amount of lock-in (and the associated cost) in exchange for immediate value (convenience). That was a reasonable approach in the past, when technology evolved slowly and a refresh happened once every few years: the cost of unshackling from the lock-in tentacle was small compared to the value gained in the preceding years. But with technology changing exponentially and business pressure forcing IT to be in a state of continual evolution (I like to use the term "living IT" to describe this), such legacy thinking does not work. We need to redefine the value of adopting technology to include immediate value plus the flexibility to evolve. In the absence of the second term, the value you are getting is not optimal.
Another aspect of legacy thinking is that people immediately assume lock-in means vendor lock-in. In my opinion, feel free to give the vendor you love all the money you have; it is not going to kill your business. What one shouldn't do is make architectural decisions that tightly couple your IT to certain tools, architecture and processes. Such lock-in is more dangerous than vendor lock-in. A tight coupling with technology, architecture and processes makes your IT the dinosaur of the modern enterprise, and the evolutionary forces of IT are destined to eliminate such dinosaurs.
As Joe said, there is no "no lock-in" nirvana. If anyone assumes that one can avoid any kind of lock-in, they are drunk at work. It is about minimizing the cost of lock-in. Such an approach is needed because IT evolution today is more continuous than in the past, when there was ample time to catch up. If you are a CIO leading an organization without this new mindset, your organization is destined to go down in history. "Loose coupling" is the keyword here.
Simon: Lock-in is not a limitation in terms of degrees of freedom. It is a context that is relevant only in the present. It is only perceptible or evaluated at the point where the user of a technology wishes to change and the existing vendor cannot serve their needs. It is not possible to predict with accuracy; foreseeing what any vendor will deliver is impossible because tech is changing so fast.
InfoQ: What technologies have a deceptive level of lock-in and a high switching cost? Something you might not first think of as creating tight coupling, but it's costly to swap out.
Krish: I don't want to point out specific technologies as ones that have tight coupling. But I would say that the higher you go up the stack, the more convenience you get but also the more lock-in possibilities. The key is to balance managing the lock-in with the need to keep the flexibility to evolve. As Joe said in the previous question, standards and open source help. If you use a higher-order service (say, the technology under the moniker "serverless computing"), make sure it is at least multi-cloud until standardization is brought in. Use any tool that solves your problem, but keep flexibility to evolve as the key mantra for your strategy.
Joe: It is important to separate switching out implementations from switching out interfaces. Standard interfaces (even at the conceptual level) reduce the risk. Users can deal with problems without having to rewrite everything. The core concepts around VMs and object stores are well understood enough that the switching cost is low.
Conversely, the more unique the system is at the conceptual level the harder it will be to switch. Unfortunately, these unique features often provide enormous value.
The nastiest surprises are those systems that appear to be a safe bet but often end up becoming a nightmare in production. Developer-focused storage systems are notorious for this. They can be super easy to get started with and provide a great experience at the start. Oftentimes, issues with performance, stability and operability will only show up after the application is launched and taking significant traffic.
In many developer facing tools there is a tendency to focus on the experience over the first 5 minutes at the expense of the next 5 months. I'm always leery when a slide at a developer conference makes things look *too* easy. Often this voodoo happens at the expense of a well reasoned and discoverable underlying architecture. The easier things look, the more likely there will be huge cliffs if you need to stray off the paved path.
Simon: I think Joe has put this perfectly.
In the area of surprising lock-in I am firmly of the view that the hardest situations are those where proprietary data stores including database software and tightly coupled applications are used. Customers must insist on open systems and APIs where they are concerned that depending on a single vendor could leave them vulnerable to price gouging and unable to adopt new technology fast enough.
But let's be fully aware that writing so called serverless applications using platforms like AWS lambda is sticky beyond belief. These apps will never move because it would require a complete rewrite. And app logic is probably more difficult to recreate on a new platform than moving data from a legacy storage system.
Cloud Opinion: Agree with Joe & Simon on which layers of the stack are most ripe for creating lock-in. From a business perspective, the components that would be riskiest to change and have the highest impact on customer experience create the highest switching cost. For example, many businesses are still locked in to AS/400 green screens because moving the data off those systems has a high cost on customer experience and could be disruptive to the business. Cost is also influenced by the ability to find the right resources that understand the technology being replaced. One way to mitigate the lock-in risk is to leverage components that support open standards; open standards based on well documented interfaces help a lot.
The caveat is that standardization of interfaces takes time, as early markets often have integrated offerings with proprietary interfaces, so avoiding lock-in while using cutting-edge technologies may not always be possible. For example, if you want to wait for open standards before consuming serverless computing, you are kinda stuck for the next 3-5 years.
InfoQ: Simon brought up AWS Lambda. What do you think companies should do when deciding how to embrace proprietary cloud services like databases, messaging layers, or "serverless" stacks? Go all-in if it delivers the desired business value? Use, but isolate it in order to reduce the dependency? Ignore it until standards are in place? What's the recommended approach?
Joe: Lambda and serverless is nothing new from a lock-in point of view. Companies should approach this like any other technology decision. Namely, they should balance the lock-in and the risk imposed by that lock-in (which factors in the likelihood they'll want to break out of jail) against the value they are receiving. This calculus is different for every company and every product. A start-up, for instance, has nothing to protect, so the risk imposed is low and the value is often very high. If the lock-in becomes a problem (like a PaaS that doesn't scale), that type of success disaster is better than the alternative -- not having any success at all.
The danger, however, is stumbling into lock-in without doing this analysis. I think that systems like Lambda (or, say, stored procedures in your database) make this really easy. Users may start out writing a little bit of glue with Lambda and soon find themselves with a significant amount of code that is tightly coupled to AWS. This is exacerbated by the fact that Lambda doesn't stand on its own; it is defined by the set of events and services across the larger AWS platform. In this way there is no Lambda replacement that isn't an almost complete clone of AWS.
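To make that concrete, here is a minimal sketch of the kind of Lambda "glue" Joe describes, assuming the standard Python handler signature and an S3 PUT notification event (the bucket and logic are placeholders). Even this tiny function is written against AWS-specific event structures and the boto3 SDK, so porting it means re-mapping both:

```python
# Hypothetical example: a tiny piece of Lambda "glue" reacting to an S3 PUT event.
# Even this much is written against the AWS-specific event shape and the boto3 SDK.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # The event layout below is defined by AWS S3 notifications; moving this
    # function elsewhere means re-mapping every field to another provider's format.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    obj = s3.get_object(Bucket=bucket, Key=key)
    payload = obj["Body"].read()

    # ...business logic would go here...
    return {"bytes_processed": len(payload)}
```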
I'd love to see an effort to characterize lock-in in developer documentation. I'd model this on the old Google "Data Liberation Front". Let's call it "Code Liberation Front". For each system and API we would give it a score (take this as a straw man/example):
- CLF 0: Completely proprietary. There is a good chance you'll get sued if you try and create a clone of this system. (Think Oracle Java APIs)
- CLF 1: Open API specs. The company that created the APIs promises not to sue you for creating an independent version of that API. Slack is a great example here.
- CLF 2: Single alternate implementation. There exists an open or independent implementation for this interface.
- CLF 2.1: The alternate implementation is community run (think AppScale for GAE)
- CLF 2.2: The alternate implementation is vendor supported
- CLF 3: Multiple alternate implementations.
- CLF 4: All code is open and community driven.
It isn't enough to apply these labels to whole systems, as independent support may drift as features are added. I'd like to see this integrated with developer documentation. Ideally you could run a "lock-in lint" tool to score how much risk you are taking on and go in with your eyes open.
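The "lock-in lint" tool is only a thought experiment, but as a rough, purely hypothetical sketch (the dependency names, scores and thresholds below are invented for illustration), it might amount to scoring a project's declared dependencies against the CLF scale:

```python
# Purely hypothetical "lock-in lint": score a project's declared dependencies
# against the CLF scale sketched above. Names and scores are illustrative only.
CLF_SCORES = {
    "aws-lambda": 0.0,   # proprietary; no independent implementation exists
    "s3-api": 2.0,       # alternate implementations of the interface exist
    "mysql": 4.0,        # fully open and community driven
}

def lockin_report(dependencies):
    """Return the average CLF score and the dependencies carrying the most lock-in risk."""
    scores = {dep: CLF_SCORES.get(dep, 0.0) for dep in dependencies}
    average = sum(scores.values()) / len(scores)
    riskiest = sorted(dep for dep, score in scores.items() if score < 2.0)
    return average, riskiest

if __name__ == "__main__":
    avg, risky = lockin_report(["aws-lambda", "s3-api", "mysql"])
    print(f"average CLF score: {avg:.1f}; highest lock-in risk: {risky}")
```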
Simon: Though I like your ideas Joe, I think it is important to note that in the context of a "Code Liberation Front" there is just as likely to be a competitive "Liberation Front for Code" that pays homage to the same laudable goals, standards or interfaces, but advances the cause of an incompatible ecosystem. We see this everywhere: VMware "Open vSwitch" and the Open Daylight Foundation; Red Hat Linux and Oracle Unbreakable Linux; Ubuntu and Debian. The list goes on and on. Why? Even though the "open and free" argument may deliver flexibility and choice to customers, it doesn't serve vendors. There are no decent Open Source business models, and if the vendors "fight to the bottom" in their attempt to garner market share customers are eventually ill served by the resulting product. If it's not a commercially viable platform/product on its own, then you're in for a different kind of lock-in — the kind where you lock yourself in, throw away the key, and nobody cares enough to help.
The extremes are perilous: a single proprietary implementation and "Open Source Everything". In the middle, where reasonable choices can be made, very large vendors might be subject to regulatory oversight (think: Microsoft and the Consent Decree) since they have so much clout. This will also apply to AWS, Azure and perhaps Google Cloud. They will make out like bandits on volume and will likely offer a greater degree of commonality of function/service for a decent period of time.
Ultimately every business application has an expected return on investment. If the sums work out for the initial investment and return, then the business should make a rational, business-centric decision and proceed. However, it may be useful to understand the fundamental shifts in pricing for the commodities (compute, storage, more complex services/products) that your application needs. Lock-in bites when massive rifts introduced by changes in the technology base turn a solution that appeared valuable in a legacy business context into an expensive white elephant. This is the way of the tech industry: choose the curve to ride, and pray you're right.
Cloud Opinion: This is going to depend on your business. How important is it for you to compete using your IT as a strategic investment? Delivering value to the business is more important than getting caught up in ideological lock-in debates. Often, adopting cutting-edge technologies helps your organization compete in the marketplace more than waiting for the technology to mature and open standards to evolve. Lock-in may be a price to pay to remain competitive in the market. However, it's important to assess the business value vs. the potential lock-in cost, just as you would with any other risk.
AWS Lambda specifically may not be ready for broad adoption at this time, not because of lock-in concerns, but because its tooling still has a long way to go to make developers productive.
Krish: I wouldn't say that we cannot use proprietary tools, but we need to be smart about when and how we use them. Waiting for standards is futile, though the situation is getting better these days as open source, through its accelerated adoption, pushes standardization forward.
Whether it is the use of proprietary database services or higher order services like Serverless Computing, you need to:
- consider the cost of lock-in in terms of the longevity of its business value. Does it give me a long rope before I feel the pressure to evolve? If not, is there an open alternative that can be implemented without incurring much cost or time?
- ask if I can segment out the services that use proprietary technologies to be as small as possible so that it doesn't impact my IT evolution (in other words, can I make sure it is not a drag on my ability to compete in the market).
Also, lock-in at the level of application dependencies, like database services, is different from lock-in at the operational layer, like serverless computing. One needs to understand the difference, evaluate the consequences for the ability to evolve rapidly and then micro-segment it architecturally before embracing it. Embracing these technologies just because they are the next shiny object is suicidal, in my opinion.
At the risk of appearing repetitive, let me emphasize that loose coupling and, in the absence of standardization, multi-cloud should be the core elements of any modern IT strategy.
Simon: I want to make a very strong point about OSS: it is neither a standard, nor is it free of lock-in. It's just code that someone else wrote, that you have no control over. And use of random bits of OSS, from GitHub or other sources, leaves you potentially locked in because nobody cares to fix your proprietary mess. OSS is NOT the answer to the lock-in challenge. OSS is an appealing approach for devs, who seek to re-use code. The risk is that they adopt poorly maintained packages from minor projects, potentially introducing bugs, security issues and more. OSS that is supported by a commercial vendor (either as a service, e.g. Amazon Linux, or by a commercial vendor such as Docker) has the potential to offer a safer middle ground, but again it's no guarantee.
Krish: I want to follow up on the OSS point Simon raised. OSS is not a standard, but in today's world it drives standardization more rapidly than proprietary software does.
Also, using any OSS bit from GitHub is comparable to the danger of downloading any software from the web. Yes, there is definitely a risk, but it is not just with the bits from GitHub; it applies to any software downloaded from the web, OSS or proprietary. In the case of OSS, if you have the knowledge and time, you have a chance to check the quality of the bits. With proprietary software, this opportunity doesn't even exist.
As general advice, whether it is proprietary software or OSS, do the due diligence. While there is absolutely no visibility or guarantee for the proprietary bits you get from unknown vendors, OSS gives you the option to do due diligence in a much better and more thorough way by giving you the source code and the help of the community on sites like GitHub (through ratings and data about forks, pull requests, etc.).
Simon: Does OSS drive standardization more rapidly than proprietary software? No it doesn't. There is no "standard" for Linux. There's just a "common denominator" set of things that you can usually rely on being present.
Also, any assertion that a user of OSS has the skill, time or expertise to review security and functionality is ill-founded. The challenge arises from the presumption that bits found online work or are secure. In the case of actively maintained and promoted projects, this is more likely to be the case, but there is zero guarantee. Moreover, using random OSS bits does not give a user any assurance or remedy when they fail to meet expectations. Proprietary software vendors, and those that package and support OSS projects, can be tied into SLAs, and this ought to be a minimal requirement if you are to avoid lock-in.
I disagree about asking enterprises to do "due diligence." We are talking about enterprise consumers of software. Asserting that the consumer of an OSS package has the resource, insight, skill to perform due diligence is naïve. If you don't want to be locked in to someone else's bad / insecure software, use software (OSS or commercial) that is supported by a commercial vendor and mandate an SLA to address support issues.
Krish: Linux is not the only OSS software, and if you read my response carefully, I highlighted that OSS today drives standardization more rapidly than proprietary software does. Docker is a pretty good example of the speed.
Also, you are putting words in my mouth about the time needed for checking out OSS bits. If you read my response, I clearly highlighted the OPPORTUNITY OSS provides for such due diligence.
I disagree that enterprise customers don't have the resources or skill to understand the metrics around OSS. Companies like Expedia, Capital One and many others are actively contributing their homegrown software as OSS. Saying that enterprise customers don't understand it no longer applies.
Joe: I want to put a finer point on my comments in response to some of what Simon said.
First, with respect to OSS -- it isn't a panacea. But open source offers an option that you wouldn't have otherwise: you can always fork and own it yourself. The costs are high there but they may be acceptable when weighed against the business value. Obviously, for a small start up, forking a project will almost never pencil out. But for a large corporation, open source provides a range of options that are not available with closed source solutions.
Now, with respect to my wacky "Code Liberation Front" idea, I think I must have been unclear. This isn't an ecosystem but rather a way to characterize and rate how locked in any specific API is. That is part of the formula for evaluating the risk of using an API and going in eyes open. Just because something isn't open source doesn't mean that it isn't valuable, useful or a good option. But the risk from lock-in is oftentimes higher with proprietary technologies than with open ones.
I think we are all in violent agreement on a few points: first, lock-in is inevitable and necessary. Second, it makes good business sense to lock yourself in to a technology if the return on that lock-in outweighs the costs. Finally, you should make sure you know what you are getting into and include lock-in risk as part of the evaluation of any technology (or features in a technology) that you are using.
Simon: Once you fork, you're on your own. Oh, that's called an enterprise proprietary solution. You're locked into your own jail. Enterprises seldom (if ever) have the resource to maintain their own forks, or to re-base from mainline. The value of OSS is the continued contribution of the community, rapid evolution, better security and functionality. But for an enterprise platform where lock-in is a concern, having one or more commercial partners who can support the OSS bits is perhaps the right answer here?
Joe: With respect to "those projects with options for commercial support from more than one vendor" I think this is an important point to amplify. Not all OSS is created equal. Ideally a healthy OSS ecosystem has support from multiple vendors. And while an enterprise may not be up to maintaining their own fork, new vendors can emerge that can play that role. A great example is MariaDB.
InfoQ: Randy Bias recently wrote that OSS helps change the vendor power dynamic and can reduce switching cost. However, as Simon points out, OSS is its own form of lock-in and is far from the key to avoiding lock-in. Does open source play a role in your lock-in decisions? Do "standards" offer relief, or are they their own form of lock-in? Or is it really about architecture, not software?
Simon: I refer to Randy's views as "Randy's Bias" :). Certainly OSS changes the power dynamic, but ultimately we are talking about lock-in. We are dealing with enterprises being stuck on a particular vendor's platform for one reason or another. We are after the reasons. OSS is not really an answer in any way here, unless the customer is prepared to switch to their own implementation of the platform using OSS software and sacrifice support SLAs that they might have had previously. This is not a serious option for almost any enterprise. OSS is interesting because it is becoming a way to engage devs in rapidly creating platforms or components that are of value in many scenarios, and that can be supported by service providers or vendors.
Cloud Opinion: I think it's very important to separate open standards from open source. While marketers would love to mix them up to sell their wares, they are different.
Open standards can help reduce lock-in risk. Open source does not help directly, but may contribute indirectly by making it a little easier to find talent to work on that component/product. But the open-source thought leaders' argument that OSS reduces lock-in seems to me like fine marketing disguised as technical advice.
Let me give an example from a domain near and dear to Joe. If you are doing federated authentication between different products, using an open standard like SAML will reduce your lock-in risk. If you use a vendor-specific API, it may increase your lock-in risk by making switching to a different product expensive. It doesn't matter that the API and sample code are on GitHub.
Now, let's take a different domain near and dear to Krish. If you are building your own "cloud", you could use open-source software like OpenStack. OpenStack is open source, not an open standard. An OpenStack distribution from Red Hat is a different beast than an OpenStack distribution from IBM. If you choose a vendor-specific distribution of OpenStack, congratulations, you have created a lock-in (lock-in may not be your biggest problem though; talent retention might be).
Joe: I totally agree that OSS and standards are very different beasts.
But let's extend that OpenStack scenario. While it may not be seamless to switch from one OpenStack distro to another, the switching cost will be lower than migrating to something that doesn't have that shared DNA. In this way OSS has reduced the lock-in risk. While standards aim to eliminate lock-in risk, OSS can be useful for reducing it.
In the auth scenario, if the API is open in a way that alternate implementations could be written (i.e. the vendor won't sue), then it is altogether possible that other vendors can implement that API and the API then becomes a de facto standard. This is what happened with the core of S3. When we were building out Google Cloud Storage we knew that there was a chicken-and-egg problem around tooling and APIs. The S3 API was simple enough that we were able to have an API-compatible implementation (at least of the core parts) and easily leverage existing tooling. In this way the command-line tool for GCS (gsutil) is based on boto -- a well known Python library for AWS.
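As a rough sketch of why that kind of de facto standard interface lowers switching cost (assuming GCS's S3-compatible interoperability endpoint and HMAC keys; the credential values below are placeholders, not working secrets), the same S3-style client code can be pointed at a different provider just by changing the endpoint and credentials:

```python
# Sketch: the same S3-style client code can target another provider that exposes
# an S3-compatible endpoint, e.g. GCS in interoperability mode with HMAC keys.
import boto3

def make_object_store(provider: str):
    if provider == "aws":
        return boto3.client("s3")
    # Assumed GCS configuration via its S3-compatible API; placeholder credentials.
    return boto3.client(
        "s3",
        endpoint_url="https://storage.googleapis.com",
        aws_access_key_id="GOOG_HMAC_ACCESS_ID",
        aws_secret_access_key="GOOG_HMAC_SECRET",
    )

# Application code stays identical regardless of which provider sits behind it.
store = make_object_store("aws")
# store.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hello")
```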
I propose a thought experiment. In my mind, Windows and Win32[1] is the ultimate example of lock-in. It took a revolution of technologies (web and mobile) to break that lock-in. How would this have played out differently if Windows had been open source? My guess is that it would have held Microsoft in check and the world would have had a softer landing.
[1]: Win32 is a great example of an API that is so complicated that it defies standardization. Even Microsoft doesn't have a set of specs good enough to do a clean implementation. The code is the spec.
Krish: This is exactly what I said in my response about lock-in risk. OSS helps drive standards. Some marketers might conflate OSS with standards, but there is no denying that OSS is at the forefront of most standardization efforts.
The problem with FUD makers on OSS is that they cherry-pick examples to bolster their claims. In a services world, open protocols are the key; anyone who has done a Services 101 course could tell you that. However, open protocols are not enough. Look at the AWS API (one close to Cloud Opinion's heart). We cannot standardize on it in spite of AWS being a market leader. Look at how they blessed Eucalyptus while not giving their blessing to CloudStack. With a proprietary stack behind the API, there is a huge risk of vendor interests going against customer interests. While OpenStack may have other problems, its API is not vendor-specific and you don't have to get a vendor's blessing to adopt it.
If anyone has graduated to Software in the Services World 101, it is easy to see that an open API is a safer bet if there is an OSS stack behind it. Does it guarantee the interoperability nirvana? No frigging way. Does it reduce the risk and increase the potential to standardize on an open API compared to a proprietary stack? Hell yes.
Randy Bias may be biased in his name, but he has another post from the past where he makes a great case for why open APIs are not enough and the architecture behind the API is important. I strongly suggest everyone read that post of Randy's. When you take into account the importance of the architecture behind the API, the value of open source behind open APIs becomes even more apparent.
Making any claims that OSS doesn't matter and only open protocols matter is just marketing in my opinion.
By the way, who wants to bet if Amazon could have built AWS by convincing the company across the lake to change the license terms? AWS happened because of the flexibility of OSS licenses. Innovation accelerates when that flexibility exists. The keyword in the previous sentence is accelerates.
Cloud Opinion: OSS is great, it's awesome and all that, but does it play a role in reducing lock-in risk directly?
Joe: Yes -- again, look at MariaDB. The fact that MySQL was OSS enabled an exit strategy as it was acquired by Oracle.
Krish: Yes. The chances of reducing lock-in are much higher than with proprietary software.
To paraphrase: if the question is "does OSS = no lock-in?", the answer is "not necessarily".
But if the question is "can open source reduce lock-in compared to proprietary software?", the answer is "yes, definitely".
InfoQ: What is your parting advice to a company that is about to embark on an effort to add or replace a key technology to their environment? What things should they look out for, what architectural considerations are important, and what should they NOT worry about?
Cloud Opinion: I think this decision should be driven by business and technical reasons first. Once there is a business or technical justification (e.g. reducing tech debt), understand the costs associated with it. Here are some considerations:
- Don't worry about things just because someone in the industry says a particular technology creates lock-in. Do your own thinking; each business is different. Do not outsource thinking.
- Do you have necessary talent on the team or can you get it through training or hiring?
- How long do you expect this technology to provide value to your business?
- Is there a risk that you will be "forced" to replace this tech? Examples could be the company providing the tech going out of business, licensing changes, pricing changes, the tech becoming expensive to manage, talent shortages, performance degradation, etc.
- Will this technology force you to make all future tech purchases from a single vendor? If so, is this vendor your "preferred" vendor? Have you done detailed due diligence on the vendor? Do you have negotiated agreements in place with the vendor?
Simon: We are in the midst of a period of unprecedented innovation in technologies, each of which threatens to up-end any previous set of IT or application delivery assumptions, including the relevance of technologies and vendors. Who could have predicted, as few as 3 years ago, that the so-called DevOps movement, combined with containerization technologies like Docker, would massively impact plans for next-gen application architecture, cloud choice and the relevance of vendors like AWS vs VMware? Who would have predicted the massive consumer popularity of the Mac, the ascendance of the iPhone and changing consumer access courtesy of apps? How will next-gen technology use-cases like IoT change the landscape yet again?
The impact of rapid innovation on traditional enterprise IT and application delivery teams is enormous:
- Staff are more or less behind the curve and struggle to keep up, or lack the ability to discern lasting technology shifts versus changes of convenience. For example, both public cloud and containerization make your choice of Linux distro rather irrelevant; similarly, adoption of SaaS applications makes virtualization rather irrelevant.
- Vendors are similarly more or less out of date, and therefore inclined to mislead customers as to the applicability of any technology option.
- Employees with skills in the currently hot technology (VMware, AWS, Docker) are bound to be hard to find and retain.
Any one of the above can lead an enterprise to make a poor decision with regard to a vendor or technology base for a critical application, and thence face the unpleasant reality of lock-in.
Given the high rate of technology and even vendor churn, the only way to reason about traditionally acquired vendor products (those that are owned or run on-prem versus offered "as a service") is to recognize and document the limitations of the existing platform that cause lock-in, contrast those with the opportunities offered by a "next-gen" approach and the cost of attaining it from a business perspective, all the while keeping the time horizon short. There will definitely be a "new even better technology" on a shorter time-scale than ever, so many choices may be tactically driven.
Many traditional on-prem, owned/licensed IT capabilities and enterprise applications are moving to SaaS. Adopting a SaaS provider is a strategic, long-term, sticky solution; in other words, it presents a significant risk of lock-in. Though the SaaS category offers many massive benefits, be aware that any SaaS app that you have used for a while will hold a large amount of enterprise data and hence be very difficult to leave. Evaluate such apps based on the immediate benefits and potential long-term costs.
Joe: I'm not sure what to add to the previous responses.
I agree that we are in the middle of a technology refresh. It is tempting to say that if we wait a bit, the clear winners will emerge and we'll have clarity. But that is what folks thought about the migration to cloud. Here we are: the cloud migration isn't a done deal and a whole new set of ideas is emerging.
My advice (as someone who has made these platforms more than used them) is to make sure you are happy with a piece of technology as it is today. There are exciting things on the horizon, but many of them are only appropriate for early adopters who have the patience and staff to ride that wave. Make decisions based on a sober look at your abilities and the state of any particular technology.
With respect to lock-in, it is worth "playing the movie" about (a) how likely it is that you'll have to abandon a technology or vendor and (b) how expensive it will be to transition. Weigh that risk against the value that you are looking to get out of the decision. Obviously all of this is dependent on both the business constraints for any company along with a healthy amount of guessing.
I think that many good technology leaders do this intuitively when making decisions. I'd love to see a better decision framework and terminology to talk about this risk so that these decisions can be approached more systematically. In other words, this conversation isn't over.
Krish: Don't pick convenience over the flexibility to evolve. That worked well in the legacy world, where a technology refresh came once or twice in a decade, but not any more. Today's IT is more of a "living IT" that is constantly evolving. #BeSmart in how you plot your IT strategy today.
Some takeaways
Of course, keeping the flexibility to evolve has a cost (though not as high a cost as in the past), but not having that flexibility also has a cost in the immediate future. #BeSmart in how you handle these needs.
- Your IT should be a loosely coupled system. Not only should your application components be loosely coupled, but so should their interfaces with deployment platforms. For example, if you are deploying your application on a platform, it is convenient to use the database service that comes bundled with the platform, but data-gravity forces might hold you up and not let your IT evolve fast enough to meet the needs of today's rapidly changing world. Make sure you have a plan to keep critical components of your application, like data, portable (see the sketch after this list). If tight coupling cannot be avoided, at least make sure it has multi-cloud support.
- On the application front, embrace a microservices architecture wherever applicable. Microservices offer the flexibility to evolve more rapidly than monoliths. You cannot move everything to a more microservices-oriented architecture all at once, but it is time to start moving some components.
- Make sure your processes are abstracted away from tight coupling. For example, you should not be forced to pick a specific deployment platform or tool just because you want to embrace DevOps. There are ways you can abstract away your processes without incurring a cost or lock-in.
- The need to avoid lock-in requires a cultural change. It requires a shift from the legacy mindset, which picks convenience over lock-in risk. Make your decision makers understand that your IT is going to be in a state of continuous flux, under constant pressure to evolve. Make your executives understand that they need to forego short-term thinking and embrace the "living IT" model.
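As a rough illustration of the loose-coupling point above (a sketch only; the interface and class names are hypothetical, not from any panelist), application code can depend on a small storage interface while the provider-specific adapters stay isolated and swappable:

```python
# Loose-coupling sketch: application code depends on a small abstract interface;
# only adapter classes know about a specific cloud service. Names are hypothetical.
from abc import ABC, abstractmethod

class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryBlobStore(BlobStore):
    """Stand-in adapter; a provider-specific adapter (S3, GCS, ...) would sit beside it."""
    def __init__(self):
        self._data = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]

def archive_invoice(store: BlobStore, invoice_id: str, payload: bytes) -> None:
    # Business logic sees only the interface, never a vendor SDK.
    store.put(f"invoices/{invoice_id}", payload)

archive_invoice(InMemoryBlobStore(), "42", b"...")
```

Swapping the bundled database or object store for another provider then means writing one new adapter rather than reworking the application logic.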
About the Panelists
Joe Beda is currently looking for his next thing as an Entrepreneur in Residence with Accel Partners. He is an advisor to CoreOS and Shippable. He has been a professional software engineer for 18 years. Over his career at Google and Microsoft, he has built browsers, designed graphics APIs, connected to the telephone system, and optimized ads. Over the past six years, he started, managed and launched Google Compute Engine and helped to create, motivate and launch the Kubernetes project. Joe's current pet project is to make it easy to secure service-to-service communication with an open project called SPIFFE. Joe holds a B.S. from Harvey Mudd College in Claremont, California. He lives in Seattle with his wife, a physician, and his two children.
Krish Subramanian is currently SVP of Products and Strategy at CloudMunch. He is also an advisor to startups in the cloud computing and big-data space. He is a strong advocate of the modern enterprise model, which helps organizations embrace flexibility as the mantra.
Simon Crosby is the co–founder and CTO of Bromium. Previously, he was the co-founder and CTO of XenSource prior to its acquisition by Citrix. He then served as the CTO of the Virtualization and Management Division at Citrix. Previously, Simon was a principal engineer at Intel, where he led strategic research in distributed autonomic computing, platform security and trust. He was also the founder of CPlane, a network-optimization software vendor. Prior to CPlane, Simon was a tenured faculty member at the University of Cambridge, where he led research on network performance and control, and multimedia operating systems. In 2007, Simon was awarded a coveted spot as one of InfoWorld’s Top 25 CTOs.
Cloud Opinion is a parody account that delivers commentary on the software industry. Find Cloud Opinion on Twitter and via blog.