
    For a Capacity Planner, Utilization is Never Enough

    September 29, 2015

    By Dino Balafas

    Though utilization rates are undeniably helpful when tracking service performance, they don’t always tell the whole story.

    For any company’s IT department, it’s always important to keep an eye on utilization numbers. Commonly used to gauge how much work a server is doing relative to its capacity, utilization rates help capacity planners determine whether their company’s systems are meeting their full potential, as explained by Ready Ratios.
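    As a rough illustration (not anything specific to TeamQuest’s tooling), utilization over a monitoring interval is simply busy time divided by elapsed time. The short Python sketch below uses hypothetical per-minute busy-time samples to show the calculation.

        # Minimal sketch: estimating utilization from periodic busy-time samples.
        # The sample values and the 60-second interval are hypothetical.
        busy_seconds = [12.0, 18.5, 33.3, 42.1]  # busy time observed in each interval
        interval_seconds = 60.0

        for i, busy in enumerate(busy_seconds, start=1):
            utilization = busy / interval_seconds
            print(f"interval {i}: {utilization:.0%} utilized")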

    But while these measurements are precise, they don’t always tell the whole story when it comes to IT performance. Every server behaves differently, so a raw number can be hard to interpret on its own. In addition, utilization statistics can’t always isolate the exact point at which a service will begin to slow down. So while any capacity planner must keep an eye on overall utilization, a more well-rounded approach is necessary to generate substantial improvement.

    Alternative Strategies

    In many cases, a company’s server may fall short of peak performance despite seemingly acceptable utilization rates. Another way to understand the speed and responsiveness of your IT service is Queuing Theory. While utilization accurately measures how busy a server is, Queuing Theory helps you understand what is actually happening to the work that server is asked to handle.

    As jobs and requests “line up” in front of a service center, the wait for their completion predictably increases. If requests arrive faster than they can be serviced, the queue will grow without bound and response times will climb steeply, as the sketch below illustrates. When combined with a measured study of utilization rates, this type of insight can pave the way toward real action and tangible results.
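    A minimal sketch, assuming purely hypothetical arrival and service rates, of how a backlog builds once requests arrive faster than the service center can complete them:

        # Minimal sketch of queue buildup: arrivals outpace the service rate,
        # so the backlog grows steadily and never drains.
        # Both rates are hypothetical, chosen only to illustrate the effect.
        arrival_rate = 12   # requests arriving per second
        service_rate = 10   # requests the server can complete per second

        backlog = 0
        for second in range(1, 11):
            backlog += arrival_rate                # new work joins the queue
            backlog -= min(backlog, service_rate)  # the server drains what it can
            print(f"t={second:2d}s  backlog={backlog} requests")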

    Although one might assume a 50% utilization rate to be a relatively safe level to run at, this is the point at which a server’s effective operating speed has already been cut in half: average response time is double the bare service time. From there, responsiveness continues to degrade dramatically as utilization climbs. Regardless of your server’s strength, optimal utilization should never really exceed 50%.
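    A minimal sketch of that relationship, using the classic single-server (M/M/1) response-time formula R = S / (1 - U) with a hypothetical 10 ms service time; at 50% utilization the response time has already doubled, and it climbs sharply beyond that:

        # Minimal sketch of the single-server (M/M/1) response-time curve:
        # R = S / (1 - U), where S is service time and U is utilization.
        # The 10 ms service time is hypothetical, used only to show the shape.
        service_time_ms = 10.0

        for utilization in (0.10, 0.30, 0.50, 0.70, 0.90, 0.95):
            response_ms = service_time_ms / (1.0 - utilization)
            print(f"U={utilization:.0%}  response time = {response_ms:.1f} ms")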

    Assistance You Can Trust

    Still, accurately pinpointing your company’s optimal operating target can be difficult and frustrating. To run a successful business online, your servers must be able to withstand surges of activity and the queues that come with them. While keeping utilization below 50% is a sound goal, the fact is that you will occasionally need to exceed that limit in order to thrive.

    While your IT department will be much better equipped to succeed with an awareness of utilization rates and Queuing Theory, the ability to apply that knowledge in a practical, business-minded way can often remain elusive.

    Enlisting the help of an experienced and tech-savvy partner like TeamQuest could be exactly what your company needs. Its tools and strategies, like AutoPredict, allow you to accurately determine the point at which your server’s response time begins to climb.

    Since different operating systems have different utilization thresholds, a resource like AutoPredict will come as a welcome change. TeamQuest can help you collect and understand all relevant reports and data, serving as your guide in navigating the informative but confusing landscape of performance capacity and utilization rates.

    (Main image credit: Avnee Cooper/flickr)
