
An obituary for Infrastructure as a Product (IaaP)

This content is 15 years old and may not reflect reality today or the author’s current opinion. Please keep its age in mind as you read it.

There’s been an interesting discussion in the Cloud Computing Use Cases group this week, after a few people aired grievances about the increasingly problematic term “private cloud”. I thought it would be useful to share my response with you, in which I explain where the cloud came from and why it is inappropriate to associate the term “cloud computing” with most (if not all) of the hardware products on the market today.

All is not lost, however: where on-site hardware is deployed (and maintained by the provider) in the course of delivering a service, the term “cloud computing” may be appropriate. That said, most of what we see in the space today is little more than the evolution of virtualisation, and ultimately box pushing.

Without further ado:

On Sat, Jul 11, 2009 at 2:35 PM, Khürt Williams <[email protected]> wrote:

I am not sure I even understand what private cloud means, given that the Cloud term was meant to refer to how the public Internet was represented on network diagrams. If it is inside my firewall then how is it “Cloud”?

Amen. The evolution of virtualisation is NOT cloud computing.

A few decades ago network diagrams necessarily contained every node and link because that was the only way to form a connected graph. Then telcos took over the middle part of it and consumers used a cloud symbol to denote anything they didn’t [need to] care about… they just stuffed a packet into one part of the cloud and it would magically appear [most of the time] out of another. Another way of looking at it (in light of the considerable complexity and cost) is “Here be dragons”; the same applies today, as managing infrastructure is both complex and costly.

Cloud computing is just that same cloud getting bigger, ultimately swallowing the servers and leaving only [part of] the clients on the outside (although with VDI nothing is sacred). Consumers now have the ability to consume computing resources on a utility basis, as they do electricity (you just pay for the connection and then use what you want). Clearly this is going to happen, and probably quicker than you might expect; I admit to being surprised when one of my first cloud consulting clients, Valeo, chose Google Apps for 30,000 users over legacy solutions back in 2007. Early adopters, as usual, will need to manage risk, but will be rewarded with significant cost and agility advantages and be immunised to an extent against “digital native” competitors.

You can be sure that when Thomas Edison rocked up 125 years or so ago with his electricity grid there were discussions very similar to those going on today. With generators (“Electricity as a Product”) you have to buy them, install them, fuel them, maintain them and ultimately replace them, which sustained a booming industry at the time. We all know how those conversations ended… Eastman Kodak is the only company I know of today still running its own coal-fired power station (though we still use generators for remote sites and backup, which will likely also be the case with cloud). Everyone else consumes “Electricity as a Service”, paying a relatively insignificant connection fee and then consuming what they need by the kilowatt hour.

What we have today is effectively “Infrastructure as a Product” and what we’ll have tomorrow is “Infrastructure as a Service” (though I prefer the term “Infrastructure Services” and expect it to be just “infrastructure” again once we’ve been successful and there is no longer any point in differentiating).

Now if legacy vendors work out how to deliver products as services (for example, by using financing to translate capex into opex and providing a full maintenance and support service) then they may have some claim to the “cloud” moniker, but that’s not what I’m seeing today. Most of the “private cloud” offerings are about hardware, software and services (as was the case in the mainframe era) rather than a true utility (per-hour) basis. Good luck competing with the likes of Google and Amazon while carrying the on-site handicap; I’m expecting the TCO of “private cloud” setups to average an order of magnitude or so more than their “public” counterparts (that is, $1/hr à la network.com rather than $0.10/hr à la Amazon EC2), irrespective of what McKinsey et al have to say on the subject.

In the context of the use cases, sure, on-premises or “internal” cloud rates a mention, but the “public/private” nomenclature is problematic for more reasons than I care to list. I personally call it “I can’t believe it’s not cloud”, but that’s not to say I leave it out of proofs of concept and pilots… I’m just careful about managing expectations. Ultimately the user and machine interfaces should be the demarcation point for such offerings, and everything on the supplier side (including upfront expense) should be of no concern whatsoever to the user. I consider utility billing and the absence of capex to be absolute requirements for cloud computing and feel this ought to be addressed in any such document; suppliers might, for example, offer the complete solution at $1/hr with a 150 concurrent instance minimum (~$100k/month).
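
If you’re wondering where that ~$100k/month figure comes from, here’s a rough back-of-the-envelope sketch in Python. The per-hour rates and the 150-instance minimum are the numbers quoted above; the ~730 hours per average month is my own assumption:

# Rough arithmetic behind the example pricing above.
# Assumption: ~730 hours in an average month (24 * 365 / 12).
hourly_rate = 1.00        # $/hr per instance for the hypothetical "private cloud" offer
minimum_instances = 150   # minimum concurrent instances in the example
hours_per_month = 730

monthly_minimum = hourly_rate * minimum_instances * hours_per_month
print(f"Minimum monthly commitment: ${monthly_minimum:,.0f}")  # ~$109,500, i.e. roughly $100k/month

# And the order-of-magnitude TCO gap mentioned earlier ($1/hr vs $0.10/hr):
ec2_style_rate = 0.10     # $/hr, utility-style public cloud pricing
print(f"Multiple over utility pricing: {hourly_rate / ec2_style_rate:.0f}x")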

Oh, and if large enterprises want to try their hand at competing with the likes of Google and Amazon by building their own next-generation datacenters, then that’s fine by me, though I equate it to wanting to build your own coal-fired power station when you should be focusing on making widgets (and it should in any case be done in an isolated company/business unit). I imagine it won’t be long before shareholders are able to string up directors for running their own infrastructure, as would be the case if they lost money over an extended outage at their own coal-fired power station when the grid was available.