Why is there no TCO in Public Cloud?

I think anyone reading this will agree when I say that interest in, and adoption of, public hyper-scale clouds such as Amazon AWS and Microsoft Azure has been nothing short of meteoric over the last 12 months. Going by the conversations we’re having and the projects we’re working on, I can no longer name an industry or sector that isn’t in the process of adopting one of these platforms in a big way.

Why wouldn’t they? Public Clouds offer a vast number of benefits from both a technical and a commercial perspective. Whether you are a digital business looking to benefit from utility-grade standardisation and extreme geographical reach for your next-generation apps, or a large enterprise looking to reduce the burden on the teams stuck in the dreaded 3/5-year life-cycle loop of modernising and replacing hardware, shifting to a true Public Cloud platform usually fits the bill.

The latter of those two scenarios, however, is often where I see the mass confusion about what Public Cloud is really about, with organisations requesting a detailed TCO analysis of what they have today (some form of owned / leased equipment) vs. applications running in a Public Cloud platform (Azure, for example) and expecting to see a tangible cost benefit somewhere.

For anyone who’s thinking of doing this – believe me when I tell you that you won’t ever see this benefit in black and white. It will, without doubt, come out considerably more expensive – sometimes by as much as 150% in my experience – but that’s OK, because there is no TCO in Public Cloud. Just to be clear before I go any further: I’m not for one second saying that organisations should disregard Public Cloud if they want to save money. I’m a huge advocate of the platforms, and I’m confident that, done correctly, migrating to a Public Cloud platform will provide you with overall savings and a cost benefit.

Back to the case in point: there are several reasons why it’s unrealistic to expect a comparative TCO to stack up in this way:

  1. Public Clouds always get cheaper over time – the price declines aren’t linear and are impossible to forecast over any period long enough for a comparative analysis of the two scenarios.
  2. It’s a completely different accounting model (CAPEX vs. OPEX) – this is a huge topic in itself but nonetheless, it’s very relevant here.
  3. The procurement, lifecycle management, maintenance and all the other intangible costs still exist – they simply become tangible, accounted for through the platform fees (the sketch after this list illustrates the effect).
  4. The operating model is completely different – it’s like comparing apples with pears.
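
To make those points concrete, here’s a minimal, deliberately naive sketch in Python of the comparison organisations typically run. Every figure is a hypothetical placeholder – not real hardware or cloud pricing – chosen purely to show the shape of the problem: the on-prem number people quote is usually just the hardware invoice (CAPEX), while the cloud fee is an all-in OPEX number with the intangibles baked in.

```python
# A deliberately naive 5-year TCO comparison, with entirely hypothetical figures.
# The on-prem column people usually quote omits the intangible costs
# (procurement, lifecycle management, maintenance effort) that the cloud
# platform fee bundles in - so cloud always looks worse on paper.

YEARS = 5

on_prem_hardware = 100_000            # visible CAPEX: the invoice you can point at
on_prem_intangibles = 30_000 * YEARS  # hidden OPEX: staff time, power, space, refresh planning

cloud_fees = 50_000 * YEARS           # one all-in OPEX figure, intangibles included

naive_on_prem = on_prem_hardware                         # what the typical analysis compares
honest_on_prem = on_prem_hardware + on_prem_intangibles  # what it should compare

print(f"Naive on-prem 'TCO':  £{naive_on_prem:>9,}")
print(f"Cloud over {YEARS} years:   £{cloud_fees:>9,} "
      f"({cloud_fees / naive_on_prem - 1:.0%} 'more expensive')")
print(f"Honest on-prem TCO:   £{honest_on_prem:>9,}")
```

With these made-up numbers, cloud comes out 150% ‘more expensive’ than the naive on-prem figure and dead level with the honest one – which is exactly why the comparison never shows a benefit in black and white.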

The whole TCO process is designed to lean towards a traditional architecture model, so even if you try to run an analysis this way, you’ll never arrive at a ‘cost benefit’ with Public Cloud. So why are so many organisations still moving in this direction?

I’m not a big fan of analogies, but as I hear so many people use this one, I figured it was a good point to build on to help answer that question. When you’re choosing your utilities (gas / electricity) provider, do you sit and develop a full 5-year TCO, or do you look around for the best blend of standing charges, consumption rates and service levels? Public Cloud is no different: you are looking for the best rates, service level and capability – it couldn’t possibly work any other way. Unless you start to view Public Cloud conceptually as a utility, you will never be successful in adopting it and, most importantly, in finding the most cost-effective way of consuming it.

As with any other utility, saving money comes later. With utilities, you run an analysis to decide whether it’s cheaper to buy LED bulbs that cost more upfront but less to run, or to stick with fluorescents that cost more to run but less to buy – you might even decide to wait until the price of LEDs drops to a level where the figures stack up. One thing you don’t do is try to work out whether it’s cheaper to build your own solar farm than to use a utility provider. The same applies to Public Cloud: you run an analysis of 10 small instances on two-year-old CPUs against 4 medium instances on the latest CPUs to see which works out best, or – like swapping in timer switches for your heating – you analyse the benefit of offering applications on a 9×5 basis rather than 24×7. You’re content that the price of resources is what it is, and you’re looking for ways to consume less with the same result.
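
To put rough numbers on those two analyses, here’s a minimal sketch in Python. The instance counts mirror the examples above, but the hourly rates are hypothetical placeholders rather than real AWS or Azure prices – the point is the shape of the comparison, not the figures.

```python
# A minimal sketch of the "consume less for the same result" analysis:
# right-sizing across CPU generations, and scheduling 9x5 instead of 24x7.
# All rates are hypothetical, not real provider pricing.

HOURS_24x7 = 24 * 7  # 168 hours per week
HOURS_9x5 = 9 * 5    # 45 hours per week

def weekly_cost(instances: int, hourly_rate: float, hours: int = HOURS_24x7) -> float:
    """Weekly cost of running `instances` instances at `hourly_rate` per hour."""
    return instances * hourly_rate * hours

# Right-sizing: 10 small instances on older CPUs vs. 4 medium on the latest.
old_gen = weekly_cost(10, 0.05)   # hypothetical 0.05/hour small instance
new_gen = weekly_cost(4, 0.10)    # hypothetical 0.10/hour medium instance
print(f"10 x small (old gen):  {old_gen:.2f}/week")
print(f" 4 x medium (new gen): {new_gen:.2f}/week")

# Scheduling: the same 4 instances offered 9x5 instead of 24x7.
business_hours = weekly_cost(4, 0.10, hours=HOURS_9x5)
print(f"24x7: {new_gen:.2f}/week vs 9x5: {business_hours:.2f}/week "
      f"({1 - business_hours / new_gen:.0%} saving)")
```

With these made-up rates, the 4 newer instances beat the 10 older ones, and the 9×5 schedule saves roughly three-quarters of the always-on cost – exactly the ‘same result, less consumption’ thinking the utility analogy points at.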

The irony is that, as consumers, we’ve waited years for technologies like smart meters to give us visibility of where our energy goes, or for smart thermostats that automatically turn our heating off to reach the maturity and reliability available today. The tooling and services to support this mindset already exist in the Public Cloud world, but the majority of organisations aren’t yet anywhere near that kind of thinking. The analogy may be stretching a bit far now, but hopefully you get the point: Public Cloud isn’t a technology shift, it’s an economic shift. You’re not managing for availability and stability anymore; you’re managing for cost and efficiency. It’s not supposed to be cheaper – it’s supposed to be easier, more agile and, done right, far more efficient.

All this, however, is easier said than done. As a Cloud Architect at ANS, I spend almost half of my time helping organisations already in the cloud become more efficient, nearly always reducing their platform spend by anywhere from 20% to 50%. That’s right – it’s not a typo. Up to 50% of platform consumption is often unnecessary and, with the right support, very easy to reclaim.

So that, people, is cloud economics – something that, if you get it right, blows the need for any TCO analysis straight out of the water and, more importantly, gets you thinking in the way that’s required for Public Cloud to stack up financially.