Don't Be Caught Off Guard by Overprovisioning
While the cloud can respond elastically to your needs, you have to understand those needs with crystalline clarity in order to determine whether you’re actually saving on IT costs.
The cloud can sometimes feel like a true set-it-and-forget-it IT technology: decommission your clunky on-premises servers, migrate your apps to the cloud, trim your in-house IT staff, let the algorithms take over, and watch the cost savings roll in. Given the instant agility and scalability the cloud provides, it’s easy to see why businesses rarely give those low up-front prices a second thought.
Unfortunately, those companies that really do approach the cloud with a set-it-and-forget-it mindset usually find, to their great surprise, that the cloud hasn’t ultimately saved them any money. Even worse, they have significant trouble locating the source of their budget problem (or recognizing that there is one, as the case may be). As with any IT product, there’s always more to the story: in this case, it’s the story of overprovisioning.
Jeff Kaplan writes in TechTarget that, “One of the ultimate ironies about the… cloud computing marketplace is that these ‘on-demand’ services can easily cost an organization far more than expected.” The truth is that the cloud requires a considerable degree of technical thoughtfulness to match your complex virtual needs with a provider and cloud service that actually meets them. Most of the time, there’s excess fluff, and that fluff comes at a direct cost to the business.
But as detrimental as these costs can be, organizations can hardly address them if they don’t know they exist. Companies often purchase more cloud capacity than their services require, and despite their best intentions, the cloud services they employ tend to mask those additional costs as pure performance.
How can this happen? For one, cloud services automatically boost your capacity when apps are lagging; that’s the promise of instant elasticity. But if the cloud is constantly compensating for slower swimmers, how can you be certain that an app with fast response times is indeed high-performing, and not an inefficient moocher riding on the wake of sunk cloud costs? The lines can blur fairly quickly.
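A quick back-of-the-envelope sketch makes this masking effect concrete. All of the numbers below (request rates, throughput per instance, hourly price) are hypothetical, chosen purely for illustration; the point is that two apps can show identical response times while their autoscaled bills differ dramatically:

```python
import math

# Hypothetical figures -- illustrative only, not any provider's real pricing.
TARGET_RPS = 1000            # requests/second the service must handle
RATE_PER_INSTANCE = 0.10     # $/hour per instance

def autoscaled_hourly_cost(req_per_sec_per_instance: float) -> float:
    """Instances the autoscaler must run to hold latency steady, times price."""
    instances = math.ceil(TARGET_RPS / req_per_sec_per_instance)
    return instances * RATE_PER_INSTANCE

# An efficient app handles 200 req/s per instance; a wasteful one only 25.
efficient_cost = autoscaled_hourly_cost(200)   # 5 instances
wasteful_cost = autoscaled_hourly_cost(25)     # 40 instances

# Users of both apps see the same fast response times -- the autoscaler
# quietly absorbs the inefficiency -- but the bills differ by 8x.
print(f"Efficient app: ${efficient_cost:.2f}/hour")
print(f"Wasteful app:  ${wasteful_cost:.2f}/hour")
```

The dashboard metric most teams watch (latency) looks identical in both cases; the inefficiency only surfaces on the invoice.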
In other cases, companies simply have too loose a grasp on their projected capacity needs. They may purchase larger, more expensive cloud instances than necessary (or rent a server for 24-hour use when it’s only needed for two); miss opportunities for volume discounts by purchasing in a short-term, ad hoc fashion; or simply select an ill-fitting cloud service and spend their money reverse engineering a solution.
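The 24-hours-versus-two scenario above is easy to quantify. Here is a minimal sketch of that arithmetic, using made-up hourly rates (not any provider’s actual rate card) for an oversized always-on instance versus a right-sized instance billed only while the workload runs:

```python
# Hypothetical hourly rates -- illustrative only, not real provider pricing.
LARGE_INSTANCE_RATE = 0.40   # $/hour for the oversized instance
SMALL_INSTANCE_RATE = 0.10   # $/hour for an instance sized to the workload

HOURS_RESERVED = 24          # server rented around the clock
HOURS_ACTUALLY_NEEDED = 2    # workload only runs two hours a day

# What the company pays: a large instance, running 24/7.
actual_daily_cost = LARGE_INSTANCE_RATE * HOURS_RESERVED

# What the workload requires: a right-sized instance, only while it runs.
right_sized_daily_cost = SMALL_INSTANCE_RATE * HOURS_ACTUALLY_NEEDED

waste = actual_daily_cost - right_sized_daily_cost
print(f"Daily spend:            ${actual_daily_cost:.2f}")
print(f"Right-sized cost:       ${right_sized_daily_cost:.2f}")
print(f"Overprovisioning waste: ${waste:.2f}/day")
```

Even with these modest toy numbers, nearly the entire daily bill is waste; at real-world instance prices and fleet sizes, the gap compounds quickly.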
These are known problems, which raises the question: why don’t businesses simply avoid them? Put simply, relatively few in-house IT experts are experienced with the cloud, and even fewer legacy IT systems are designed for it. Generally, if you put a new technology in the hands of experts who were never trained to deal with it, they’ll implement it in a way that makes the most sense to them — and oftentimes, that way will be far from the most efficient one possible.
In response to the challenges of the cloud, many organizations are turning to third-party IT management providers to ensure that their cloud spend is aligned as cost-efficiently as possible with their business needs. For instance, TeamQuest’s Vityl suite of capacity management tools helps organizations to confidently identify instances of overprovisioning — whether in physical, cloud, or heterogeneous environments — and predict where such costs are likely to be incurred in the future.
It takes proactive management to make truly effective use of the cloud. If this technology is going to respond to IT needs in a way that’s truly elastic, then IT will have to attentively engage with it.
(Image credit: Wikimedia)