In the pre-cloud days, developers who wanted to build an application needed to think a lot about servers. They needed to budget for them, plan for them, connect them, power them and house them. They had to buy or lease the servers, the power supplies, cabling and cooling - and then set it all up in their datacenter or in a colocation facility.
Over time, colocation facilities began taking many parts of the equation off developers' plates – providing racks, power, Internet access and other key resources. Even so, provisioning, clustering, and maintaining servers still demanded lots of money (capital expenditures, power, internet, cooling, security), tons of time, and detailed planning (contingency, develop/test/production environments, site growth, and so on).
Enter The Cloud
In the last two years we’ve seen a seismic shift in computing. The question is no longer "Why cloud?" or even "How cloud?" Infrastructure-as-a-Service (IaaS) has delivered dramatic improvements in cost, agility, scalability – and yes, with the right architecture, reliability. The cloud has simply removed a significant chunk of the work around managing and provisioning servers.