Public Cloud Computing is NOT For Everyone

December 2, 2010

Obligatory picture of random cloud

Without pointing any fingers, there seems to be a persistent refrain from some public cloud computing proponents that says, ‘If you are running your own IT, then you are doing it wrong’. This attitude fails to account for the magnitude and value of many legacy investments in people, process, and technology. It ignores the many challenges and risks posed by migrating enterprise IT to public cloud service providers.

I have no doubt that many organizations will continue to run their own IT, even as they also adopt public cloud services – for very good reasons. At the same time, many will migrate their entire IT environment, wholesale, to public cloud services. I do not see this as even slightly contentious.

In fact, my colleague, Gregor Petri (@GregorPetri) argues very well that this should not even be a debate. I cannot disagree – and in an ideal world I would not be talking about it either – but there needs to be a realistic balance to the ‘private IT is doing it wrong’ crowd.

After all, try telling most Fortune 1000 CIOs that they should tip their entire multi-data center IT investment, along with their entire end-user IT investment, into the dump and put it all on Amazon (like Netflix did). I bet they will laugh in your face before they kick you out of their office and get back to doing real work.

They have a sunk cost in existing IT investments – not to mention a valuable investment in people who know the environment and the business they are supporting. This is also supported by a vast array of IT and business processes that actually make the business more efficient and effective.

Public cloud might be logical for most smaller businesses, new businesses, or new applications like Netflix streaming video service, but for large enterprises, completely abandoning many millions of dollars of paid-for equipment, and an immeasurable amount of process and skill investment, is frequently unjustifiable. As much as they might want to get rid of internal IT, for large enterprises especially, it simply will not make sense – financially, or to the business.

“To start with, there is the massive cost of rewriting all the existing applications”

To start with, there is the massive cost of rewriting all the existing applications from mainframe, UNIX, i5/OS, Unisys, and NonStop (at least) to run on commodity servers and infrastructures. Of course, this is not just a porting exercise. Migrating to public cloud would require completely new architectures for most enterprise applications, not least to accommodate the higher impact (though not necessarily greater frequency) of the downtime endemic in cloud computing. Moreover, given the typically high utilization, performance, and throughput of most of these ‘legacy’ platforms, the cost benefit of migration to commodity systems is questionable at best.

Deploying all new applications to the cloud and just letting legacy applications die through attrition will not allow wholesale migration to public cloud providers either. Some enterprises have applications that are 10, 20, or even 30 years old and still running critical workloads. Today’s crucial applications are going to be around – and running on private ‘legacy’ systems – for a long, long time.

In many cases, regardless of financial factors, it is not even desirable to move enterprise IT into the cloud. Despite all the failings – and they are typically legion – of existing investments in ‘legacy’ people, process, and technology, they frequently do still deliver substantial value to the business. Moreover, they have baked-in an irreplaceably deep level of experience, commitment, and understanding of the core business. They don’t treat all workloads as equal, and do prioritize the most important services in their portfolios.

Commodity cloud providers on the other hand treat all workloads – and all customers – in a shared environment as commodities. They do not provide the special treatment that some workloads really do require – such as some compliance-bound, high-revenue, or non-stop workloads. They do not provide time or event-based reactions to changing business priorities. They do not make sure to allocate the first servers that come up after a system-wide outage to high-priority workloads based on business policy. I talk to CIOs all the time who are laser-focused on aligning IT to the business; if a public cloud provider doesn’t even understand their business priorities, let alone prioritize them, then that goal is difficult, if not impossible, to reach.
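To make the contrast concrete, here is a hypothetical sketch (all workload names, priorities, and numbers are invented for illustration) of the kind of business-policy-driven allocation a private cloud could perform after an outage, and which a commodity provider that does not know your priorities cannot:

```python
def allocate_recovered_servers(workloads, servers_online):
    """Assign recovering servers to workloads in business-priority order.

    workloads: list of (name, priority, servers_needed) tuples,
               where a lower priority number means more critical.
    servers_online: number of servers currently back up.
    Returns a dict mapping workload name -> servers allocated.
    """
    allocation = {}
    remaining = servers_online
    # Most critical workloads get capacity first, per business policy.
    for name, priority, needed in sorted(workloads, key=lambda w: w[1]):
        granted = min(needed, remaining)
        if granted:
            allocation[name] = granted
            remaining -= granted
    return allocation

# Example: only 6 of the 14 required servers are back up, so the
# payments system is restored first, then the web store gets the rest.
plan = allocate_recovered_servers(
    [("payments", 1, 4), ("reporting", 3, 6), ("web-store", 2, 4)],
    servers_online=6,
)
```

The point is not the ten lines of code, of course, but that someone has to encode the business priorities in the first place; a shared commodity platform has no notion of which tuple matters most to *your* business.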

“there is no guarantee that your cloud service provider will not just pull the plug on ‘your’ IT service”

This commodity attitude can be even more deleterious when a business (or their website) comes under attack – such as directly in a DDoS or similar attack, in a public relations campaign, or by a sovereign government – for supporting a controversial cause, providing some information, or otherwise becoming unpopular with some group – private or public. When that happens there is no guarantee that your cloud service provider will not just pull the plug on ‘your’ IT service for fear of public backlash, or under pressure from some government force, covert or otherwise (as recently happened with Wikileaks and with Florida pastor Terry Jones). They may even just ‘bump’ one business workload for another, simply because they have oversold capacity – a common practice in all sorts of shared service industries (including transportation, telecommunications, utilities, banking, and Web hosting).

For cloud providers, each supported IT service is just part of their revenue. If hosting any given IT service is not profitable to them (and/or causes them a public relations or legal problem), whether they cancel your contract is simply an ROI calculation. They are not going to fight their customers’ legal battles; they are not going to stand up to an autocratic (or even democratic) government; they are not going to risk their whole business for the sake of one customer. If the service provider does pull the plug, the business is likely to be left not only without an IT infrastructure, but possibly without an offsite backup to restore continuity on another provider – assuming another host will even take them on.

For many enterprises then, moving their private IT to public cloud service providers would not just add cost, but also add additional management burdens, compliance issues, security threats, and business risks. For enterprises that can operate within their own scalable and dynamic data center, public cloud is not ‘your mess for less’; it is ‘more mess for more’.

Moreover, in some cases there may not even be a viable (or even possible) cloud computing solution – for example, in low (or no) bandwidth environments, delivering POS or ATM infrastructure, hardware-dependent back-office systems, or high-volume distributed end user computing.

“There is no reason why private IT cannot gain at least some, if not all, of the benefits of public cloud”

However, there is no reason why private IT cannot gain at least some, if not all, of the benefits of public cloud. Most large enterprises can, will, and should use virtualization, automation, and service management to build elastic resource pools, allocate them to fixed and variable service requirements, and effectively deliver on-demand computing.

With continuous capacity management, integrated with performance monitoring, provisioning, and configuration management, this is possible even without over-investing in spare capacity. Even though they don’t actually have ‘unlimited’ compute resources, large well-managed private IT systems can appear infinitely scalable, at least as much as cloud service providers can. It would actually be interesting to compare the IT resources of major global enterprises with Amazon and other service providers. I would bet that enterprises like Wal-Mart, Citigroup, General Electric, etc. (not to mention many governments) actually have more compute resources to allocate in a dynamic cloud than most of the supposedly ‘infinite’ cloud providers.
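As a rough illustration of that 'headroom without over-investment' idea, here is a hypothetical sketch of a capacity-management rule (the 70 percent target and step size are invented numbers, not a recommendation): keep utilization at or below a target, and provision more capacity in fixed increments only when the threshold is crossed.

```python
import math

def servers_to_provision(in_use, pool_size, target_util=0.70, step=4):
    """Return how many servers to add to keep utilization at or below target.

    in_use: servers currently allocated to workloads.
    pool_size: total servers in the elastic pool.
    target_util: desired maximum utilization (the spare is headroom).
    step: provision in fixed increments, as a capacity planner might.
    """
    if in_use / pool_size <= target_util:
        return 0  # enough headroom already; no over-investment needed
    # Smallest pool growth that brings utilization back to target,
    # rounded up to the provisioning step.
    needed = in_use / target_util - pool_size
    return math.ceil(needed / step) * step
```

Run continuously against monitoring data, a rule like this is what lets a finite private pool look 'infinitely scalable' to its consumers: the pool grows just ahead of demand rather than being sized for the worst case up front.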

With highly automated IT solutions, enterprises like Qualcomm have already delivered these benefits with their own private internal cloud. Others have been so successful they are now public cloud providers themselves – like Telstra Australia and Verizon. In fact, Gartner is actually predicting that by 2015, 20 percent of non-IT Global 500 companies will be cloud service providers. Moreover, there is even evidence that over time, investment in private IT infrastructure is actually more cost-effective than outsourcing it to external cloud providers, especially for larger enterprises.

I am certainly not saying enterprises should avoid public cloud computing entirely. That is just as absurd as the converse. Public cloud computing provides incredible opportunity, especially for small and mid-sized businesses, but also for enterprises. Without doubt, many workloads absolutely should be relocated to public cloud providers; some businesses probably are doing it wrong by running any IT of their own.

“The long run is a misleading guide to current affairs. In the long run we are all dead.”

And perhaps in 20 years all these problems will be solved. But I do not think it is very useful to rely too much on what might happen ‘in the long run’. It is an interesting exercise, and certainly can help inform long-term strategy, but as Keynes famously said, “The long run is a misleading guide to current affairs. In the long run we are all dead.” Predicting some hypothetical future state where all problems are solved is just a bit too convenient.

However, framing cloud computing as ‘all or nothing’ is a false dichotomy, because there is a realistic and hugely popular third option: a hybrid cloud. Indeed, most real-life CIOs are actually planning or deploying this model today, where both internal and external IT are combined in the best possible ways to drive business value. Most enterprise CIOs running their own IT are also engaging in both evolutionary and revolutionary approaches to cloud; managing a hybrid supply chain of public, private, and hybrid IT; and delivering a complex mix of IT services.

They are not ‘doing it wrong’ – they are doing what they need to do, with what they have, to make their businesses successful.


6 Responses to Public Cloud Computing is NOT For Everyone

  1. December 3, 2010 at 14:36

    Hi Andy,

    I am really pleased all this debate has come to the surface over the last couple of days and I want to write a detailed response to some of the comments being made. I agree and disagree with your false dichotomy comment and am excited this has come out; hopefully some of the pundits will be able to express what it is that the naysayers have been missing. It is not just about cost saving and flexible scaling. If it were, then I would have to agree insofar as the larger and more savvy the organization, the less different the two are. But this is not the key point around PRIVATE vs PUBLIC in my opinion. The key difference is the capacity to apply Metcalfe’s law to the data as a result of its interoperability.

    No time now, but soon I am going to have to expand these thoughts and explain.

    Currently there are two camps, and they are both talking at cross purposes, thinking they disagree when they are actually simply talking about two different things.

    • December 4, 2010 at 17:21

      Hi Alan, thanks for the comment.

      I have to say, you have me intrigued to read your full response and thoughts. I had not considered it before, but I can see that the notion of applying Metcalfe’s Law to business data in the cloud has potentially huge impact. The business value of the data could be multiplied exponentially, not just for the owner but for many others as well, as data diversity and interoperability would create a vast array of entirely new data sets.

      This in turn would have very far-reaching implications, not just for cloud-based BI and what that could mean to business execution, but also for current concepts of data (and metadata) privacy, anonymisation, protection, federation, etc.

      I am just shooting from the hip – but it sounds like you have thought this through a lot more. I would love to see what you are thinking – please let me know when you post anything on this. Fascinating ideas!

  2. December 2, 2010 at 15:08

    This is what Hybrid Cloud really looks like :-)

    Netflix has divided its IT into legacy apps (which have indeed been left in the datacenters) and strategic apps (where all the investment is in cloud).
    The question for IT is how you should be balancing investments of time and money in legacy or in strategic apps. The more focus you put on strategic apps, the bigger the competitive advantage, and since the public cloud agility and cost is much better you get more back for your investment.

    The strategic apps are exiting the datacenter to make room for the legacy to continue to grow with minimal investment. The problem with hybrid cloud (Roman riding) is all about data access and synchronization, and the cost and time spent solving that problem often turns out to be bigger than the cost and time to implement a pure public cloud hosted solution.

    • December 3, 2010 at 03:08

      Hey Adrian, thanks for commenting. Nice pic! But great points too. Optimizing the balance of investment in time and money is an important way to look at the issue. The problem of synchronizing big data with limited pipes is another good point.

      Interesting that even a company like Netflix, the poster child for public cloud success, and less than 15 years old, is still running a bunch of ‘legacy’ systems on-premises. It’s all about getting the right balance, I think.
