How to "Hack" Your Elastic Infrastructure
There are many ways to unlock efficiencies in your IT infrastructure. But dynamic cost-saving isn’t going to happen all by itself.
The promise of cloud-based, elastic environments is that IT service costs stay in line with expanding and contracting use. In this ideal pay-as-you-go model, companies can keep costs low most of the time and dial up server capacity only when traffic surges demand it.
In practice, however, this self-correcting system simply doesn’t exist. There’s a reason companies are heavily rewarded for having concrete data, solid controls, and seamless automation in elastic and virtualized environments: without those things, there’s no guarantee you aren’t spending far more than you need to on any given day. The truth is that these systems require additional oversight to run optimally; the cloud doesn’t save you as much money as possible of its own accord.
To get the most out of your elastic infrastructure, you have to “hack” it: knowing when to be elastic and when to set hard limits. That can be simple or difficult, because it takes both the right tools and the expertise to see what’s necessary, what’s cost-efficient, and what level of service is worth spending precious IT dollars on.
It’s Hard to Stay Small
Many companies that move their IT infrastructures to cloud environments get punished for making intuitive choices that look great on paper, but end up costing them substantially more than necessary.
For example, we routinely encounter clients who want to frontload their systems to prepare for worst-case demand scenarios. Setting up a Windows application, for instance, they say, “We need to reserve some SQL servers of such-and-such a size to handle demand spikes, just in case.” If they need the extra capacity, they figure, it’s there to protect them from disaster.
Unfortunately, what often happens is that, once they install the server, they find that they really only need to use it for two hours a day, yet are paying to reserve it 24/7. Expecting to pay a few hundred bucks per month, they’re stupefied when they’re handed a $2,800 bill.
The “hack” is being able to dynamically adjust your server usage so that you pay only for the capacity you use. Instead of running the server constantly, the client should pursue one of two options:
- Turn the server on for two hours per day.
- Run systems on a smaller, less expensive machine by default. Then, transition to a larger machine for those two hours.
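The arithmetic behind either option is worth making explicit. A rough sketch of the savings, using an illustrative hourly rate for the large instance (the $3.90/hour figure and 30-day month are assumptions for the sake of the example, not quoted prices):

```python
# Back-of-the-envelope comparison of always-on vs. scheduled usage.
# The $3.90/hour rate is an illustrative assumption, not a quoted price.
HOURLY_RATE = 3.90    # assumed cost of the large SQL server instance, $/hour
DAYS_PER_MONTH = 30

def monthly_cost(hours_per_day: float, rate: float = HOURLY_RATE) -> float:
    """Monthly cost of running an instance hours_per_day, every day."""
    return hours_per_day * rate * DAYS_PER_MONTH

always_on = monthly_cost(24)   # reserved 24/7, whether it's busy or not
scheduled = monthly_cost(2)    # powered on only for the two busy hours

print(f"Always on: ${always_on:,.2f}/month")
print(f"Scheduled: ${scheduled:,.2f}/month")
```

At these assumed rates the always-on reservation lands in the same ballpark as the surprise bill described above, while the scheduled approach stays at “a few hundred bucks.”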
In practice, IT managers can’t depend on their infrastructures to adjust automatically, despite their fully elastic capabilities.
Capacity Management Is No Hack Job
A more fully fleshed-out solution looks like this: by default, run systems on a medium-sized virtual “node” (or cloud instance), which might cost $0.40 per hour. The IT manager then sets thresholds so that systems automatically step up to the $4.50-an-hour big guns, say a 16-vCPU node with 64 GB of RAM, when demand calls for the capacity. The reality is that, without this kind of dynamic, proactive control, a large, cloud-based SQL server can easily cost a company six times as much as it would sitting in a physical datacenter.
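The threshold logic itself is straightforward. A minimal sketch of the decision rule, assuming CPU utilization as the trigger metric (the node names and threshold values here are hypothetical; in practice the switch would call your cloud provider's resize or instance-swap API):

```python
# Sketch of threshold-based node selection: run on the cheaper medium node
# by default, and switch to the large node only while load demands it.
# Node names and threshold values are illustrative assumptions.
MEDIUM, LARGE = "medium-node", "large-node"
SCALE_UP_CPU = 0.80    # step up above 80% CPU utilization
SCALE_DOWN_CPU = 0.40  # step back down below 40%

def choose_node(current: str, cpu_utilization: float) -> str:
    """Pick the node size for the next interval. The gap between the two
    thresholds (hysteresis) keeps the system from flapping between sizes
    when load hovers near a single cutoff."""
    if current == MEDIUM and cpu_utilization > SCALE_UP_CPU:
        return LARGE
    if current == LARGE and cpu_utilization < SCALE_DOWN_CPU:
        return MEDIUM
    return current

# A quiet system stays on the medium node; a spike triggers the large one.
print(choose_node(MEDIUM, 0.30))  # stays on the medium node
print(choose_node(MEDIUM, 0.95))  # switches to the large node
print(choose_node(LARGE, 0.60))   # stays large until load clearly drops
```

A monitoring loop would evaluate this rule on each polling interval and act only when the chosen node differs from the current one.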
This hack (which is by now an open secret), however, requires more than simply turning your systems off and on. Along with using the right capacity management and monitoring tools, it entails tangibly committing to agile development within your organization. The result, of course, is a highly flexible infrastructure that automates data-driven efficiency.
Companies can, and should, retain every benefit presented by cloud-based and virtual environments. With a well-structured approach, companies can reliably integrate and automate the hacking of their infrastructures.
To learn more, read the TeamQuest White Paper, DevOps Development: Keeping the Lights on Activities.
(Main image credit: Wikimedia)