Without pointing any fingers, there seems to be a persistent refrain from some public cloud computing proponents that says, ‘If you are running your own IT, then you are doing it wrong’. This attitude fails to account for the magnitude and value of many legacy investments in people, process, and technology. It ignores the many challenges and risks posed by migrating enterprise IT to public cloud service providers.
I have no doubt that many organizations will continue to run their own IT, even as they also adopt public cloud services – for very good reasons. At the same time, many will migrate their entire IT environment, wholesale, to public cloud services. I do not see this as even slightly contentious.
In fact, my colleague, Gregor Petri (@GregorPetri) argues very well that this should not even be a debate. I cannot disagree – and in an ideal world I would not be talking about it either – but there needs to be a realistic balance to the ‘private IT is doing it wrong’ crowd.
After all, try telling most Fortune 1000 CIOs that they should tip their entire multi-data center IT investment, along with their entire end-user IT investment, into the dump and put it all on Amazon (like Netflix did). I bet they will laugh in your face before they kick you out of their office and get back to doing real work.
They have a sunk cost in existing IT investments – not to mention a valuable investment in people who know the environment and the business they are supporting. These investments are also supported by a vast array of IT and business processes that actually make the business more efficient and effective.
Public cloud might be logical for most smaller businesses, new businesses, or new applications like Netflix’s streaming video service, but for large enterprises, completely abandoning many millions of dollars of paid-for equipment, and an immeasurable amount of process and skill investment, is frequently unjustifiable. As much as they might want to get rid of internal IT, for large enterprises especially, it simply will not make sense – financially or to the business.
To start with, there is the massive cost of rewriting all the existing applications from mainframe, UNIX, i5/OS, Unisys, and NonStop (at least) to run on commodity servers and infrastructures. Of course, this is not just a porting exercise. Migrating to public cloud would require completely new architectures for most enterprise applications, not least to accommodate the higher impact (though not necessarily greater frequency) of the downtime endemic in cloud computing. Moreover, given the typically high utilization, performance, and throughput of most of these ‘legacy’ platforms, the cost benefit of migration to commodity systems is questionable at best.
Deploying all new applications to the cloud and just letting legacy applications die through attrition will not allow wholesale migration to public cloud providers either. Some enterprises have applications that are 10, 20, or even 30 years old and still running critical workloads. Today’s crucial applications are going to be around – and running on private ‘legacy’ systems – for a long, long time.
In many cases, regardless of financial factors, it is not even desirable to move enterprise IT into the cloud. Despite all the failings – and they are typically legion – of existing investments in ‘legacy’ people, process, and technology, they frequently do still deliver substantial value to the business. Moreover, they have baked in an irreplaceably deep level of experience, commitment, and understanding of the core business. They do not treat all workloads as equal, and they prioritize the most important services in their portfolios.
Commodity cloud providers on the other hand treat all workloads – and all customers – in a shared environment as commodities. They do not provide the special treatment that some workloads really do require – such as some compliance-bound, high-revenue, or non-stop workloads. They do not provide time or event-based reactions to changing business priorities. They do not make sure to allocate the first servers that come up after a system-wide outage to high-priority workloads based on business policy. I talk to CIOs all the time who are laser-focused on aligning IT to the business; if a public cloud provider doesn’t even understand their business priorities, let alone prioritize them, then that goal is difficult, if not impossible, to reach.
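The business-policy-driven recovery described above is straightforward to express in-house but absent from commodity clouds. As a purely illustrative sketch – the workload names and priority values here are invented, not drawn from any real provider or product – restarting workloads most-critical-first might look like:

```python
# Illustrative sketch only: workload names and priority values are
# hypothetical examples, not any real provider's API or policy engine.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Workload:
    priority: int                      # lower number = more critical to the business
    name: str = field(compare=False)   # name is a label, not part of the ordering

def restart_order(workloads):
    """Return workload names in the order they should be brought back up
    after a system-wide outage: most business-critical first."""
    heap = list(workloads)
    heapq.heapify(heap)
    return [heapq.heappop(heap).name for _ in range(len(heap))]

workloads = [
    Workload(3, "marketing-analytics"),
    Workload(1, "payment-processing"),   # compliance-bound, high-revenue
    Workload(2, "order-fulfilment"),
]
print(restart_order(workloads))
# ['payment-processing', 'order-fulfilment', 'marketing-analytics']
```

The point is not the ten lines of code; it is that the priority numbers encode business policy that only the enterprise itself knows – exactly what a shared commodity environment does not capture.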
This commodity attitude can be even more deleterious when a business (or its website) comes under attack – whether directly, in a DDoS or similar attack, in a public relations campaign, or by a sovereign government – for supporting a controversial cause, providing some information, or otherwise becoming unpopular with some group, private or public. When that happens, there is no guarantee that your cloud service provider will not just pull the plug on ‘your’ IT service for fear of public backlash, or under pressure from some government force, covert or otherwise (as recently happened with Wikileaks and with Florida pastor Terry Jones). They may even just ‘bump’ one business workload for another simply because they have oversold capacity – a common practice in all sorts of shared service industries (including transportation, telecommunications, utilities, banking, and Web hosting).
For cloud providers, each supported IT service is just part of their revenue. If hosting any given IT service is not profitable to them (and/or causes them a public relations or legal problem), whether they cancel your contract is simply an ROI calculation. They are not going to fight their customers’ legal battles; they are not going to stand up to an autocratic (or even democratic) government; they are not going to risk their whole business for the sake of one customer. If the service provider does pull the plug, the business is likely to be left not only without an IT infrastructure, but possibly without an offsite backup to restore continuity on another provider – assuming another host will even take them on.
For many enterprises, then, moving their private IT to public cloud service providers would not just add cost, but also additional management burdens, compliance issues, security threats, and business risks. For enterprises that can operate within their own scalable and dynamic data center, public cloud is not ‘your mess for less’; it is ‘more mess for more’.
Moreover, in some cases there may not even be an available (or even possible) cloud computing solution – for example, in low- (or no-) bandwidth environments, for delivering POS or ATM infrastructure, for hardware-dependent back-office systems, or for high-volume distributed end-user computing.
However, there is no reason why private IT cannot gain at least some, if not all, of the benefits of public cloud. Most large enterprises can, will, and should use virtualization, automation, and service management to build elastic resource pools, allocate them to fixed and variable service requirements, and effectively deliver on-demand computing.
With continuous capacity management, integrated with performance monitoring, provisioning, and configuration management, this is possible even without over-investing in spare capacity. Even though they don’t actually have ‘unlimited’ compute resources, large well-managed private IT systems can appear infinitely scalable, at least as much as cloud service providers can. It would actually be interesting to compare the IT resources of major global enterprises with Amazon and other service providers. I would bet that enterprises like Wal-Mart, Citigroup, General Electric, etc. (not to mention many governments) actually have more compute resources to allocate in a dynamic cloud than most of the supposedly ‘infinite’ cloud providers.
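Continuous capacity management ultimately reduces to questions like ‘will this resource pool breach its headroom within the planning horizon?’. A deliberately minimal sketch – the 80 percent headroom threshold, the four-week horizon, and the linear growth forecast are all assumptions chosen for illustration, not a real tool’s defaults:

```python
# Hedged sketch of the headroom check behind continuous capacity management.
# Threshold, horizon, and the linear forecast are illustrative assumptions.

def needs_provisioning(current_util, growth_per_week, weeks_ahead=4,
                       headroom=0.80):
    """Return True if forecast utilization (as a fraction of pool capacity)
    will exceed the headroom threshold within the planning horizon."""
    forecast = current_util + growth_per_week * weeks_ahead
    return forecast > headroom

# A pool at 60% utilization growing 6 points/week breaches 80% within 4 weeks.
print(needs_provisioning(0.60, 0.06))  # True
# The same pool growing 3 points/week stays inside its headroom.
print(needs_provisioning(0.60, 0.03))  # False
```

Real capacity tools use far richer forecasts, but the principle is the same: with monitoring feeding a check like this, capacity is provisioned just ahead of demand, which is how a private pool appears ‘infinitely’ scalable without over-investing in idle hardware.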
With highly automated IT solutions, enterprises like Qualcomm have already delivered these benefits with their own private internal cloud. Others have been so successful they are now public cloud providers themselves – like Telstra Australia and Verizon. In fact, Gartner is actually predicting that by 2015, 20 percent of non-IT Global 500 companies will be cloud service providers. Moreover, there is even evidence that over time, investment in private IT infrastructure is actually more cost-effective than outsourcing it to external cloud providers, especially for larger enterprises.
I am certainly not saying enterprises should avoid public cloud computing entirely. That is just as absurd as the converse. Public cloud computing provides incredible opportunity, especially for small and mid-sized businesses, but also for enterprises. Without doubt, many workloads absolutely should be relocated to public cloud providers; some businesses probably are doing it wrong by running any IT of their own.
And perhaps in 20 years all these problems will be solved. But I do not think it is very useful to rely too much on what might happen ‘in the long run’. It is an interesting exercise, and certainly can help inform long-term strategy, but as Keynes famously said, “The long run is a misleading guide to current affairs. In the long run we are all dead.” And predicting some hypothetical future state in which all problems are solved is just a bit too convenient.
However, framing cloud computing as ‘all or nothing’ is a false dichotomy, because there is a realistic and hugely popular third option: a hybrid cloud. Indeed, most real-life CIOs are actually planning or deploying this model today, where internal and external IT are combined in the best possible ways to drive business value. Most enterprise CIOs running their own IT are also engaging in both evolutionary and revolutionary approaches to cloud; managing a supply chain of public, private, and hybrid IT; and delivering a complex mix of IT services.
They are not ‘doing it wrong’ – they are doing what they need to do, with what they have, to make their businesses successful.