Moving Your Cloud Strategy from Theory to Practice
Hybrid clouds remain the most popular cloud configuration among enterprises, partly because of their flexibility. One way to take advantage of that flexibility is cloud bursting, but bursting without the proper tools can be a risky proposition.
As both private and public clouds continue to mature and become more efficient, the IT industry has witnessed an explosion of cloud infrastructure options. Enterprises can now choose among multiple cloud offerings from a proliferating field of vendors, and all indications suggest they're taking full advantage: RightScale reports that 58% of enterprises with at least 1,000 employees currently operate within a hybrid cloud environment. At 20%, the use of multiple public clouds, or a “multi-cloud,” is the only other configuration to reach double-digit adoption, confirming that hybrid clouds remain the preferred choice of most enterprises.
The question for nearly three of every five enterprises, then, is how best to realize a hybrid cloud strategy that boosts performance and reduces costs. In practice, users, developers, and operations teams are all prone to creating workloads and then neglecting to release the computing resources they've claimed, leading to virtual sprawl and skyrocketing expenses.
Auto-scaling and orchestration both have roles to play in striking the proper balance between high performance and low cost, but enterprises should also seriously consider experimenting with cloud bursting. Executed with the proper tools, cloud bursting can play a major part in any enterprise's transition to a fully optimized hybrid cloud.
In theory, a cloud burst is a temporary extension of a workload into another cloud environment, ideally triggered as the workload's current host approaches capacity, and it releases its computing resources as soon as the job is done. This may take the form of relocation from a physical, on-premises server to a private cloud, but in the context of hybrid clouds, it most often means migrating a workload from a private cloud to a public cloud.
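That extend-then-release behavior is essentially a thresholded control loop. The sketch below illustrates the idea under stated assumptions: the utilization readings and threshold values are invented for illustration, and none of the names correspond to a real vendor API.

```python
BURST_THRESHOLD = 0.85    # extend to the public cloud above 85% utilization
RELEASE_THRESHOLD = 0.60  # release public resources once load falls to 60%

def burst_controller(utilization_samples):
    """Walk a series of private-cloud utilization readings (0.0-1.0)
    and decide, at each step, whether public-cloud capacity is held.
    The two thresholds form a hysteresis band, so a workload hovering
    near a single cutoff doesn't burst and release on every reading."""
    bursting = False
    decisions = []
    for u in utilization_samples:
        if not bursting and u >= BURST_THRESHOLD:
            bursting = True      # spike: extend the workload outward
        elif bursting and u <= RELEASE_THRESHOLD:
            bursting = False     # job done: hand the resources back
        decisions.append(bursting)
    return decisions

# A spike triggers a burst, which is released once the load subsides:
print(burst_controller([0.5, 0.9, 0.8, 0.7, 0.55, 0.4]))
# [False, True, True, True, False, False]
```

The gap between the two thresholds is a deliberate design choice: releasing at the same level that triggered the burst would cause rapid flip-flopping whenever utilization sits near the cutoff.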
Because these migrations usually occur over a WAN, a certain amount of latency is inevitable. The latency problem is exacerbated, however, when an enterprise's application stacks aren't designed to scale seamlessly into a cloud. If, for example, a front-end engine is shifted into a public cloud while its associated data management system remains in the private cloud, the efficiency gained from the unburdened public cloud resources is negated by the constant communication required between the two clouds. Anytime relevant data gets “left behind” during a cloud burst, the benefit of the burst will be marginal at best: the resulting latency drags down application performance, which is exactly what cloud bursts are meant to rectify in the first place.
The reality is that, to be effective, cloud bursts need to be planned precisely and well in advance. Not only do spontaneous bursts introduce unwanted latency, they may leave an enterprise with no choice but to purchase on-demand cloud instances, which, regardless of vendor, are typically the most expensive way to buy computing resources.
Provided an enterprise can predict its future capacity needs with a moderate degree of accuracy, it should be able to reserve the requisite instances at a much lower price. This allows IT administrators to pre-stage properly configured virtual machines (VMs), along with the data those VMs will likely require, on the public cloud. That way, when the enterprise's private cloud experiences a spike in traffic, the public cloud is ready to provide relief without any of the problems described above.
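The cost argument can be made concrete with some back-of-the-envelope arithmetic. This sketch compares covering a forecast peak with pre-reserved capacity versus buying everything on demand; the hourly rates and the billing model (reserved capacity billed whether used or not) are simplifying assumptions for illustration, not any vendor's actual pricing.

```python
# Hypothetical hourly rates for illustration only; real pricing varies.
ON_DEMAND_RATE = 0.40  # $/instance-hour, purchased spontaneously
RESERVED_RATE = 0.25   # $/instance-hour, reserved well in advance

def burst_cost(forecast, reserved):
    """Cost of covering an hourly demand forecast (instances needed
    per hour) when `reserved` instances are pre-staged. Reserved
    capacity is billed every hour whether used or not; any demand
    beyond the reservation is covered on demand."""
    reserved_cost = reserved * len(forecast) * RESERVED_RATE
    overflow_cost = ON_DEMAND_RATE * sum(max(0, n - reserved) for n in forecast)
    return reserved_cost + overflow_cost

# Instances needed in each of six peak hours:
forecast = [2, 4, 6, 6, 4, 2]
print(round(burst_cost(forecast, reserved=4), 2))  # pre-reserved: 7.6
print(round(burst_cost(forecast, reserved=0), 2))  # all on demand: 9.6
```

Even with some reserved hours going idle, the reservation comes out ahead in this toy forecast, which is the essence of the planning argument: the better the forecast, the tighter the reservation can be sized and the larger the savings.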
This solution may appear foolproof on paper, but in practice, it requires a tremendous amount of automation. An IT team might be able to manage a small collection of cloud bursts using this strategy, but savings aren’t really achieved until bursts are completely integrated into an enterprise’s IT activity. Edward Haletky, the principal analyst at The Virtualization Practice, argues that effective deployment of cloud bursting “requires absolute trust in the automation, proper monitoring and control.” He continues, “Are companies doing this? I’m sure they are, but I only know of really big companies.”
Haletky's first observation is clearly correct: cloud bursting has the potential to be disastrous without trustworthy automation, monitoring, and control. His second observation, though, may not be entirely accurate. The level of automation, monitoring, and control required to cloud burst responsibly does demand the capabilities of a “really big company,” but there are ways to secure those capabilities without first becoming an industry giant.
For instance, by employing tools like those included in the Vityl suite, enterprises can effectively track applications as cloud bursts are conducted, allowing them to maximize performance and minimize costs simultaneously. Vityl can’t independently deliver the balance sheet of a “really big company,” but it can offer cloud management tools that are among the best in the business, and more and more executives are beginning to realize that, contrary to traditional belief, good IT can drive not only innovation, but revenue.