Business Cases and Cloud Computing
Tags [ app engine, AWS, business case, cloud computing, PaaS, TCO ]
I just read a very interesting article by Gregory Ness on seekingalpha.com that talks about some of the technology trends behind cloud computing. One key quote:
Automation and control has been both a key driver and a barrier for the adoption of new technology as well as an enterprise’s ability to monetize past investments. Increasingly complex networks are requiring escalating rates of manual intervention. This dynamic will have more impact on IT spending over the next five years than the global recession, because automation is often the best answer to the productivity and expense challenge.
One other cited link is to an IDC study that includes the following graph:
Note that staffing accounts for 60% of the cost of maintaining a server over its lifetime. Cloud infrastructure services like Amazon EC2 would really only save an enterprise data center the hardware setup and software install costs, which probably amount to a small slice of staff time for a given server. Administering the server once it is running is the bulk of the cost, and that won't go away on EC2: you'll still need operations staff to provision and image cloud infrastructure. EC2 makes sense if AWS's economies of scale let it achieve a lower cost on that other 40% than you can, or if there is a business / time-to-market value proposition in being able to provision hardware on EC2 more rapidly than you can acquire and install hardware yourself.
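To make that split concrete, here's a toy breakdown in Python. The 60/40 staffing-versus-everything-else split comes from the IDC figure; how staffing divides between one-time setup and ongoing administration is purely my own guess for illustration:

    # Toy TCO breakdown for the argument above. The 60/40 staffing split is
    # from the IDC figure; splitting staffing into setup vs. ongoing admin
    # is my own illustrative assumption.
    tco = {
        "hardware_and_other": 0.40,  # non-staffing share of lifetime TCO
        "staff_setup":        0.10,  # assumed: one-time rack/install/image labor
        "staff_admin":        0.50,  # assumed: ongoing administration
    }

    # Raw infrastructure services (EC2) plausibly address the hardware share
    # plus the setup labor; ongoing administration stays with you.
    addressable = tco["hardware_and_other"] + tco["staff_setup"]
    print(f"TCO share IaaS can touch:  {addressable:.0%}")
    print(f"TCO share that stays put:  {tco['staff_admin']:.0%}")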
Given the huge economy of scale that the large cloud providers have (tens of thousands of servers), it is going to be hard to get your costs for that 40% lower than what they can achieve with their existing infrastructure automation and bulk hardware purchasing, especially for a startup company whose hardware needs are initially modest. Let's guess that there's a 33% markup on cost for EC2, so when you are getting charged $0.10 per CPU-hour, it's really only costing them about $0.075. Let's also assume a 75% experience curve on infrastructure (meaning that each time you double the number of servers you have deployed, the marginal server costs only 75% of what it did at the halfway point).
By one estimate, Amazon has 30,000 servers. Now let's work backward (1 / 0.75 ≈ 1.33): at 15,000 servers, their marginal cost was $0.075 * 1.33 ≈ $0.10. At 7,500 servers, their marginal cost was $0.10 * 1.33 ≈ $0.13. In other words, you'd have to be planning to deploy around 15,000 servers in order to have a hope of getting your marginal cost under what they'll charge you retail.
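If you want to play with the numbers yourself, here's the same back-of-the-envelope math in a few lines of Python (the fleet size, base cost, and learning rate are the guesses from above, not anything Amazon has published):

    import math

    # Back-of-the-envelope experience-curve model for the numbers above.
    # Assumptions from the post (not published AWS figures): a 75% learning
    # rate and a $0.075/CPU-hour marginal cost at a 30,000-server fleet.
    LEARNING_RATE = 0.75   # each doubling cuts marginal cost to 75%
    REF_SERVERS = 30000    # estimated Amazon fleet size
    REF_COST = 0.075       # assumed marginal cost per CPU-hour at that size
    RETAIL = 0.10          # EC2 retail price per small-instance hour

    def marginal_cost(servers):
        """Marginal cost per CPU-hour at a given fleet size, walking the
        experience curve from the 30,000-server reference point."""
        doublings = math.log2(servers / REF_SERVERS)
        return REF_COST * LEARNING_RATE ** doublings

    for n in (30000, 15000, 7500, 1000):
        c = marginal_cost(n)
        print(f"{n:>6} servers: ${c:.4f}/hr "
              f"({'under' if c < RETAIL else 'at or above'} retail)")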
(I think this is actually a conservative estimate: the experience/learning curve for infrastructure deployment is probably steeper than 75%, given existing hierarchical deployment patterns and a product (provisioned servers) that lends itself well to automation. Also, given the high barrier to entry in cloud computing (the sheer number of servers you need to be competitive), they can probably get away with charging an even higher markup.)
One corollary of this is that if you are currently running a data center with far fewer servers (i.e., the hardware is a sunk cost), you might actually be better off turning your data center off and leasing from Amazon. Of course, there are some things (customer credit card data, extremely sensitive business information) that you just wouldn't be willing to host outside your own data center. But that's probably a very specific set of data: host that stuff in-house and lease the rest in the cloud, particularly if you can get adequate SLAs from your cloud vendor.
So that deals with the 40% of the TCO for a server that isn’t staffing. How do you cut costs on the other 60%?
You won't really be able to make a dent in that 60% with fully automated infrastructure provisioning alone; you need fully automated software deployment and provisioning as well. That isn't possible until you build on standardized, scale-on-demand computing platforms with specific functionality, like Akamai NetStorage, Amazon S3 / EBS / SQS / SimpleDB, and Google App Engine. These are known as "Platform-as-a-Service" (PaaS) offerings.
There's a similar experience-curve argument here: you could spend internal development time building some kind of application deployment framework, but you'd essentially have to be willing to build and deploy within an order of magnitude of the number of different apps the Google App Engine team does in order to get your costs under what Google will charge you. Unless you are in the business of directly competing with them in the PaaS market, you might as well buy from them and focus your energy on providing your unique business value, not software or hardware infrastructure. [Editor's note: this was something Matt Stevens said to me a while ago, and it wasn't until I went through the mental exercise of writing this article that I actually got it.]
Yesterday I implemented (not prototyped) a service in Google App Engine in about 6 hours that would cost around $400 per month (according to their recent pricing announcements) if projected usage were more than double what it is now. I estimate the same service would require at least 10 database servers just to host the data in a scalable, performant fashion, never mind the REST data interface (web nodes) sitting in front of it. On Amazon EC2, that would be $720 per month for 10 small instances (assuming those were even beefy enough), and, per the experience-curve argument above, probably far more than that in your own data center. And that's not counting any of the reliability / load-balancing infrastructure.
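For completeness, the EC2 figure is straight instance arithmetic (the 10-instance count is my own scaling estimate, not a benchmark):

    # The EC2 figure above spelled out: 10 small instances at the
    # then-current $0.10/hour rate, running around the clock.
    rate = 0.10            # $/hour, EC2 small instance
    hours = 24 * 30        # one month of continuous uptime
    instances = 10         # my estimate of the database servers needed
    print(f"EC2: {instances} x ${rate:.2f}/hr x {hours} hr = "
          f"${rate * hours * instances:.0f}/month")  # -> $720/month
    # vs. roughly $400/month on App Engine at double current usage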
So my open question is: how, as a software developer, can you justify not building your app in one of these cloud frameworks?