A couple of weeks ago, the New York Times reported on a Cisco announcement that it would start manufacturing what could best be described as "cloud computing appliances": commodity servers with virtualization software pre-installed. I believe the notion here is that you just rack up enough identical boxes to meet your total computational needs, then virtualize all your applications onto that substrate. This is entirely achievable, too, by the way, as this (mind-blowing, at least for me) demo of 3Tera's AppLogic virtual data center product shows--literal drag and drop, plug and play, connect-the-dots configuration magic.
On Tuesday, William Louth posted an article describing what he calls Activity-Based Costing (ABC)--essentially a profiler for testing a cloud computing implementation. He describes hooks that can be wrapped around calls to cloud APIs such as Google App Engine or an Amazon S3-style storage service. When you run a test implementation, these hooks record the API calls you make, giving you an estimate of what your implementation will cost in cloud computing charges.
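Louth's article describes the real mechanism; as a rough illustration of the idea only, here is a minimal Python sketch of metering hooks wrapped around API calls. The service names, the `metered` decorator, and the per-call rates are all invented for the example--they are not real provider pricing or Louth's actual implementation.

```python
from collections import defaultdict
from functools import wraps

# Hypothetical per-call rates (illustrative only, not real provider pricing).
RATES = {"s3_put": 0.00001, "s3_get": 0.000001}

call_counts = defaultdict(int)

def metered(service):
    """Wrap a function so every call to it is counted against a service."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            call_counts[service] += 1
            return func(*args, **kwargs)
        return wrapper
    return decorator

@metered("s3_put")
def put_object(key, data):
    pass  # stand-in for a real storage API call

def estimated_cost():
    """Total up recorded calls against the rate card."""
    return sum(call_counts[s] * RATES[s] for s in call_counts)

# A test run of the implementation: 1000 object writes.
for i in range(1000):
    put_object(f"key-{i}", b"payload")

print(round(estimated_cost(), 6))  # 1000 puts at $0.00001 each, about 0.01
```

Running the test workload once through the instrumented calls gives a dollar estimate before anything is deployed to a real (billed) cloud account.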
On Monday, Deutsche Telekom announced a spinoff startup company called Zimory which aims to create an online marketplace--the Zimory Public Cloud--for elastic cloud computing resources. Companies with spare computing resources can install an agent, commit to a certain SLA level, and then begin selling their excess capacity. Buyers of resources can follow an online provisioning process similar to that found on Amazon Web Services EC2: select a virtual machine image, select a level of service, provide your credit card, and off you go.
Are you familiar with the economic theory of "experience curves" (also known as "learning curves")? In the case of cloud computing, it explains not only why it makes sense to outsource new data center costs to cloud providers like Amazon Web Services (AWS), but also why it may make sense for you to stop operating a data center at all. Experience curves were formalized by the Boston Consulting Group (BCG) and describe how production costs tend to fall in a predictable fashion as the number of units produced increases.
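The post above doesn't state the formula, but the usual BCG-style formulation models unit cost as C(n) = C(1) · n^b, where the exponent b is chosen so that each doubling of cumulative volume multiplies cost by a constant "learning rate." A small sketch of that standard form:

```python
import math

def unit_cost(first_unit_cost, cumulative_units, learning_rate):
    """Experience-curve unit cost: each doubling of cumulative volume
    multiplies unit cost by the learning rate (e.g. 0.8 for an '80% curve')."""
    b = math.log(learning_rate) / math.log(2)
    return first_unit_cost * cumulative_units ** b

# An 80% curve: cost falls 20% with every doubling of cumulative production.
print(unit_cost(100.0, 1, 0.8))  # 100.0
print(unit_cost(100.0, 2, 0.8))  # about 80.0
print(unit_cost(100.0, 4, 0.8))  # about 64.0
```

The implication for cloud computing: a provider running millions of server-units rides much further down the curve than any single company's data center ever can.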
New "cloud computing" vendors like Amazon Web Services, Google App Engine, and others are changing the game for businesses needing to host Internet applications and services. This site will keep readers up-to-date on new developments in the field, while providing the economic and technical background and analysis needed to make critical business decisions. Welcome, and enjoy.
I just read a very interesting article by Gregory Ness on seekingalpha.com that talks about some of the technology trends behind cloud computing. One key quote: “Automation and control has been both a key driver and a barrier for the adoption of new technology as well as an enterprise’s ability to monetize past investments. Increasingly complex networks are requiring escalating rates of manual intervention. This dynamic will have more impact on IT spending over the next five years than the global recession, because automation is often the best answer to the productivity and expense challenge.”
I have been reading the Richardson and Ruby book RESTful Web Services and recently had an epiphany: if you design a RESTful web site, it is also a RESTful web API. In this post I’ll show exactly how that works and how you can use this to rapidly build a prototype of a modern web application. First of all, let’s start with a very simple application concept and build it from the ground up.
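To sketch the core of that epiphany (everything here--the `get_post` function, the route, and the sample data--is invented for illustration, not taken from the book): the same resource at the same URL can serve an HTML representation to a browser and a JSON representation to a program, selected by the Accept header. The site and the API become one design.

```python
import json

# One resource, one URL, two representations (hypothetical data and routing).
POSTS = {1: {"title": "Hello REST", "body": "Same URL, two formats."}}

def get_post(post_id, accept="text/html"):
    """Serve a post as HTML for browsers or JSON for API clients,
    chosen by the client's Accept header."""
    post = POSTS[post_id]
    if "application/json" in accept:
        return json.dumps(post)
    return f"<h1>{post['title']}</h1><p>{post['body']}</p>"

print(get_post(1))                             # HTML for a browser
print(get_post(1, accept="application/json"))  # JSON for a program
```

In a real framework you would hang this dispatch off the same route handler, so adding the API to the site costs almost nothing.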
You are probably well aware of the need for offsite backups; as a technology professional, this is one of the first arrangements I look into for any permanent storage of business information. When I started two years ago at an internal “startup” within a large company, one of the first things we did was set up an SVN repository and then work out an arrangement with an offsite data storage provider.
Metrics are an important part of any development group’s toolset. If we want to continually improve our ability to develop software (through a lean engineering kaizen approach, or simply as a learning organization), then we need a way to answer two questions: Which parts of our process need improvement? And when we make a change, did it help or hurt? This is where process metrics come into play. I’ll start with my definition of a metric: a numerical measurement of something.
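For a concrete (invented) example of such a metric: average cycle time per completed story, compared before and after a process change, answers the "did it help or hurt" question numerically. The data below is made up for illustration.

```python
from statistics import mean

# Illustrative process metric: cycle time in days for completed stories,
# measured in the iteration before and after a process change (invented data).
cycle_times_before = [5, 8, 6, 7]
cycle_times_after = [4, 5, 6, 4]

print(mean(cycle_times_before))  # 6.5
print(mean(cycle_times_after))   # 4.75
```

A drop in the average (here 6.5 to 4.75 days) is evidence the change helped; the point is simply that the judgment rests on a number rather than a feeling.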
“Simplicity is the ultimate sophistication.” –Leonardo da Vinci. “Everything should be made as simple as possible, but no simpler.” –Albert Einstein. “A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.” –Antoine de Saint-Exupéry. I’ve written before about the notion of technical debt. In this post, I want to discuss a few specific sources of technical debt that are easy to accrue, particularly in an agile, iteration-based setting.