In my last post (The future of cloud computing – an army of monkeys?) I took exception to the concept of a ‘private cloud’ as presented by a number of hardware and software vendors, who would have us believe it means ‘any large, intelligent and/or reliable cluster’ (typically while trying to sell you one). I suggested we call a spade a spade (and a cluster a cluster), but I didn’t go into much detail about why it is impossible to emulate some of the key features of cloud computing in-house: most notably peak-load engineering, but more generally unleashing the true potential of a global service-oriented architecture.
Let’s use a concrete [counter]example:
I spend a lot of time in a small town dominated by two very large companies working on difficult environmental problems that, for the sake of argument, require a bunch of computing horsepower for modeling every other day. Yesterday, $CUSTOMER would have called $VENDOR to ask for $CLUSTER, and the vendor(s) (quite possibly the same vendor for both) would have built two identical datacenters, each engineered for the (identical) peak load of its company: exactly double what is required in total.
In this simple example they could have come to some time-sharing arrangement and dealt with the same security problems grids deal with (remember, these two are competitors working on the same problems), but that approach only works for a very small number of clients, and often not at all.
A cloud provider, on the other hand, would have been able to securely share the hardware between these two companies while abstracting away the capex, running costs and so on. The customers would pay for the cloud provider’s ‘overhead’, but that provider only has to buy half the hardware, hire half the staff, rent half the real estate, etc., and the customers don’t have to worry about building and maintaining any of it, so they still win.
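The arithmetic behind this claim can be sketched in a few lines. A minimal sketch, assuming invented numbers and the alternating every-other-day schedule described above: dedicated provisioning must cover each company’s own peak, while a shared provider only needs to cover the peak of the combined demand.

```python
# Hypothetical daily compute demand (in nodes) for two companies whose
# modeling workloads peak on alternating days. The figures are invented
# purely to illustrate the provisioning arithmetic.
company_a = [100, 0, 100, 0, 100, 0]
company_b = [0, 100, 0, 100, 0, 100]

# Dedicated clusters: each company must buy enough for its own peak.
dedicated_capacity = max(company_a) + max(company_b)

# Shared (cloud) provisioning: buy enough for the peak of the combined load.
combined = [a + b for a, b in zip(company_a, company_b)]
shared_capacity = max(combined)

print(dedicated_capacity, shared_capacity)  # 200 100
```

With perfectly anti-correlated peaks the provider needs exactly half the hardware; real workloads overlap less neatly, but the saving grows with every additional customer pooled.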
Another very significant win for being ‘in the cloud’ is that once they’ve done their modeling they can easily build on top of other cloud services. For example, they can slap the results straight onto a map hosted by another provider, without having to buy very expensive mapping software, middleware, integration work and of course the maps themselves (actually, since the results are not public they typically still have to pay something, but a fraction of what it would have cost before).
In this case they are a ‘client’ of The Cloud, a second-rate citizen if you like, but by pumping the results back out to their stakeholders (employees, clients, suppliers and/or other cloud users) they can become a full participant, in much the same way that participation is a key characteristic of Web 2.0 (itself a key component of cloud computing).
A purist view? Perhaps, but it’s becoming increasingly clear that this is how computing will look in the not-too-distant future, and the technology itself is a reality today. Before you sign the purchase order for that shiny new ‘private cloud’, do yourself a favour and make sure you and your vendor are on the same wavelength by challenging them to explain why their cloud is not a cluster, without resorting to “It’s the vibe…”.
At the end of the day these companies can get on with making widgets, with reduced costs and without having to worry about becoming computing experts (the equivalent of building your own power station just to keep the lights on). The gains in agility and efficiency will give these early adopters a significant advantage over competitors who remain shackled to legacy systems.
Updated 5 August 2008 10:00 UTC: This, on the other hand, is a ‘private cloud’… don’t forget to check out The Flight of the Conchords’ hilarious but potentially NSFW ‘Business Time’ YouTube video at the bottom.