Mitigating Risk—and Cost—in Increasingly Virtualized Environments

    June 13, 2017

    By Jeff Schultz

    It’s no secret that IT infrastructures are becoming increasingly dependent upon virtualized environments. Find out the best way to capitalize on the unique advantages of virtual machines—without being blindsided by their drawbacks.

    According to the 2016 Cloud Computing Executive Summary, American enterprises are expected to dedicate $1.77 million per organization to cloud computing expenses in 2017. Thanks in no small part to its use of virtual machines (VMs), cloud computing offers enterprises levels of efficiency and reliability that were previously unthinkable.

    However, these benefits aren’t without tradeoffs. Some cloud provider contracts expose enterprises to financial risk, and the costs of operating highly virtualized computing environments can spiral out of control if not properly managed.

    It’s true that there are many strategies for balancing costs and performance and fully realizing the “promise of the cloud.” But enterprises looking to establish a competitive advantage would be wise to attend to the pivotal role VMs play in this high-stakes high-wire act.

    The Power and Problems of Virtual Machines

    As technological advances have allowed CPUs to pack in more cores and deliver greater computing power, memory has become the limiting factor in VM performance. Memory has seen advances of its own, but the demands of newer operating systems (OSs) and applications have grown correspondingly, nullifying any efficiency gains that better memory offers.

    VM memory introduces new complexity to ordinary RAM functions: virtualization adds distinct, mutually unaware layers that, while offering administrators tremendous operational flexibility, can be challenging to manage.

    In a virtualized environment, the hypervisor emulates a full hardware setup for each machine, allowing multiple VMs to share the resources of a single physical device even though they run independently and need not be compatible with one another.

    The hypervisor allocates a set amount of physical memory to each VM hosted on the actual hardware. When resources become scarce, the hypervisor creates swap files for the VMs, moving the least recently accessed memory pages to hard disks and freeing up RAM for more pressing processes.

    On the one hand, this guarantees that virtual hardware almost never fails (unless the underlying physical hardware fails). On the other, when a swapped page needs to be recalled, the VM will end up waiting on the disk drive.
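    To make that swapping behavior concrete, the following is a minimal, purely illustrative Python sketch of how a hypervisor-style memory manager might evict the least recently accessed pages to a swap file once physical RAM runs short. The class, method, and page names are hypothetical and do not correspond to any vendor's API.

```python
from collections import OrderedDict

class HostMemory:
    """Toy model of hypervisor memory management (illustrative only)."""

    def __init__(self, physical_pages):
        self.capacity = physical_pages          # pages of physical RAM
        self.resident = OrderedDict()           # page_id -> owning VM, in LRU order
        self.swapped = {}                       # page_id -> owning VM, parked on disk

    def touch(self, vm, page_id):
        """A VM accesses a page; swap it in if needed, evicting LRU pages when full."""
        if page_id in self.swapped:             # page fault: would mean a slow disk read
            self.swapped.pop(page_id)
        if page_id in self.resident:
            self.resident.move_to_end(page_id)  # mark as most recently used
        else:
            self.resident[page_id] = vm
        while len(self.resident) > self.capacity:
            old_page, owner = self.resident.popitem(last=False)  # least recently used
            self.swapped[old_page] = owner      # moved to the swap file on disk

host = HostMemory(physical_pages=2)
host.touch("vm1", "a"); host.touch("vm2", "b"); host.touch("vm1", "c")
print(host.swapped)   # {'a': 'vm1'} -- page 'a' now lives on disk
```

    The last line hints at the crux of the problem discussed next: the guest that owns page "a" has no idea it has been swapped out.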

    The issue is that VMs cannot tell whether they are accessing actual physical memory on the host infrastructure or swap files, which makes performance management unusually difficult.

    Because VMs are not always aware of which physical resources they are using, systems administrators must monitor both the virtual and physical host infrastructures for signs of resource over-commitment or over-provisioning. The former threatens performance; the latter, affordability.
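    As a rough illustration of why both layers need watching, the sketch below compares hypothetical host-level and guest-level metrics to flag over-commitment (the host is swapping) and over-provisioning (granted memory sits largely unused). The metric names and thresholds are assumptions made for the example, not figures from any particular product.

```python
def check_host(host_metrics, guest_metrics,
               swap_warn_mb=256, waste_warn_ratio=0.5):
    """Flag over-commitment and over-provisioning on one host (illustrative)."""
    findings = []

    # Over-commitment: the hypervisor is swapping VM memory to disk,
    # which the guests cannot see but which slows them down.
    if host_metrics["swap_used_mb"] > swap_warn_mb:
        findings.append("over-committed: host is swapping VM memory to disk")

    # Over-provisioning: guests are granted far more memory than they use,
    # which wastes capacity (and money) rather than performance.
    granted = sum(g["granted_mb"] for g in guest_metrics)
    active = sum(g["active_mb"] for g in guest_metrics)
    if granted and active / granted < waste_warn_ratio:
        findings.append("over-provisioned: most granted memory sits idle")

    return findings or ["host looks healthy"]

print(check_host(
    {"swap_used_mb": 512},
    [{"granted_mb": 8192, "active_mb": 1024},
     {"granted_mb": 4096, "active_mb": 512}],
))
```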

    Three Strategies for Optimizing Virtual Machines

    Fortunately, there are strategies that help organizations take full advantage of the benefits that VMs have to offer while simultaneously minimizing their drawbacks.

    1. Disable Transparent Page Sharing

    Administrators should disable Transparent Page Sharing (TPS) in their hypervisors. When this feature is enabled, the hypervisor watches for instances in which multiple VMs are using identical memory pages and, instead of maintaining duplicates, points all of those VMs to a single shared page, freeing the space previously occupied by the duplicates for other tasks.
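    For intuition, here is a small Python sketch of the page-sharing idea itself: identical page contents are detected (here via hashing) and backed by a single stored copy. It is illustrative only and glosses over details such as copy-on-write when a VM later modifies a shared page; it is not how any specific hypervisor implements TPS.

```python
import hashlib

class SharedPageStore:
    """Toy transparent-page-sharing model: identical pages are stored once."""

    def __init__(self):
        self.pages = {}      # content hash -> page bytes (single physical copy)
        self.mappings = {}   # (vm, guest_page_no) -> content hash

    def write_page(self, vm, guest_page_no, content: bytes):
        digest = hashlib.sha256(content).hexdigest()
        self.pages.setdefault(digest, content)          # deduplicate identical pages
        self.mappings[(vm, guest_page_no)] = digest

    def physical_pages_used(self):
        return len(self.pages)

store = SharedPageStore()
zero_page = bytes(4096)
store.write_page("vm1", 0, zero_page)
store.write_page("vm2", 0, zero_page)                   # same content -> shared, not duplicated
store.write_page("vm1", 1, b"unique data".ljust(4096, b"\0"))
print(store.physical_pages_used())                      # 2 physical pages back 3 guest pages
```

    The sketch also shows why accounting gets murky: the two guests each believe they own their zero page, while the host holds only one copy.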

    From a resource conservation perspective this strategy is quite useful, but from a management perspective, it makes tracking over-commitment significantly more difficult. As such, if an enterprise is attempting to monitor and manage a complex IT infrastructure, it’s advisable to avoid TPS altogether.

    2. Strategically Mix VMs on Host Systems

    Strategically mixing the kinds of VMs assigned to each host system can ensure that physical resources are fully exploited. Provided an administrator sets the proper reservations and limits, interspersing production VMs with test and development VMs allows an enterprise to privilege its core processes while still keeping lower-priority tasks running until there is a resource shortage.
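    As a sketch of the underlying idea, the snippet below fills hypothetical hosts with production VMs first, honoring their reservations, and then slots test and development VMs into whatever capacity remains. The host names, VM names, and sizes are invented for illustration.

```python
def place_vms(hosts, vms):
    """Greedy placement: production VMs first, then test/dev into leftover capacity."""
    placement = {h["name"]: [] for h in hosts}
    free = {h["name"]: h["memory_gb"] for h in hosts}

    # Production first, largest reservations first, so core workloads always fit.
    ordered = sorted(vms, key=lambda v: (v["tier"] != "prod", -v["reservation_gb"]))
    for vm in ordered:
        for host in hosts:
            name = host["name"]
            if free[name] >= vm["reservation_gb"]:
                placement[name].append(vm["name"])
                free[name] -= vm["reservation_gb"]
                break
        else:
            print(f"deferred {vm['name']} (no spare capacity)")  # low-priority VM can wait
    return placement

hosts = [{"name": "esx1", "memory_gb": 64}, {"name": "esx2", "memory_gb": 64}]
vms = [
    {"name": "erp-db", "tier": "prod", "reservation_gb": 48},
    {"name": "web-01", "tier": "prod", "reservation_gb": 32},
    {"name": "dev-ci", "tier": "dev",  "reservation_gb": 24},
    {"name": "qa-lab", "tier": "dev",  "reservation_gb": 24},
]
print(place_vms(hosts, vms))
```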

    3. Implement Tools That Work on All Environments

    Finally, enterprises must make sure they’re implementing tools that are capable of monitoring risk across all computing environments, not just virtual ones.

    Most enterprises are supported by a combination of on-premises, off-premises, virtualized, and cloud IT infrastructures. Without flexible tools, administrators would have to switch constantly between monitoring applications—a recipe for confusion.

    Capacity management software—like the Vityl suite from TeamQuest—automatically delivers notifications regarding an enterprise’s current IT health and future risk, regardless of the infrastructure type(s) or vendor(s) in place.

    This type of software helps administrators compare the cost of cloud services with that of any in-house infrastructure. It also gives them visibility into physical servers, hypervisors, individual guest OSs, guest OS interactions, and networking and storage functions, so they can allocate the optimal amount of resources to each VM and find and eliminate unused VMs.
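    For example, a much-simplified version of the "find unused VMs" check might look like the sketch below, which flags VMs whose average CPU and active-memory figures over an observation window fall under arbitrary thresholds. The sample data and cutoffs are hypothetical; a real capacity management tool would work from far richer telemetry.

```python
def find_idle_vms(samples, cpu_pct_max=2.0, active_mem_pct_max=5.0):
    """Return VM names whose average utilization suggests they are unused."""
    idle = []
    for vm, history in samples.items():
        avg_cpu = sum(s["cpu_pct"] for s in history) / len(history)
        avg_mem = sum(s["active_mem_pct"] for s in history) / len(history)
        if avg_cpu <= cpu_pct_max and avg_mem <= active_mem_pct_max:
            idle.append(vm)
    return idle

samples = {
    "legacy-report": [{"cpu_pct": 0.4, "active_mem_pct": 2.1},
                      {"cpu_pct": 0.6, "active_mem_pct": 1.9}],
    "web-frontend":  [{"cpu_pct": 35.0, "active_mem_pct": 48.0},
                      {"cpu_pct": 41.0, "active_mem_pct": 52.0}],
}
print(find_idle_vms(samples))   # ['legacy-report'] -- a candidate for reclamation
```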

    Maximizing the efficiency of VMs is an essential—and challenging—task. When selecting a software vendor, choose one that enables the full integration of monitoring functions across an enterprise’s IT infrastructure. Without a doubt, this kind of comprehensive approach to IT management is the best way forward as the industry becomes increasingly virtualized and cloud-based.

    Ready to learn more about managing virtual environments?

    Read the white paper

