What Is Queueing Theory? A Primer on the Basics

    February 17, 2016

    By Jeff Schultz

    Though it’s become fairly easy to predict what computers will do, it’s much harder to guess how much stress the people who use them will put on them. Applied mathematics offers a neat resolution to this problem.

    Many companies buy more IT infrastructure than they need, despite the cost of overprovisioning. Why? Because they hope to safeguard themselves against downtime by boosting server capacity. This response is certainly expensive, but what these companies have isn’t a spending problem; it’s a modeling problem.

    Anticipating CPU usage is surprisingly tricky, and many IT leaders, fearful that workload spikes will take down their services, overprovision their hardware to keep CPU usage well below 100%. The reason is that performance degrades nonlinearly with utilization: once servers pass a certain (often unknown) usage level, response times become very hard to predict. Better safe, never reaching that threshold, than sorry.
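    The classic single-server queueing model (M/M/1, which assumes random arrivals and exponential service times) makes that nonlinearity concrete: average response time is the service time divided by (1 - utilization). The sketch below is purely illustrative, with hypothetical numbers, and is not taken from any particular product.

    ```python
    # Illustrative sketch (not from the article): average response time in an M/M/1 queue.
    # R = S / (1 - rho), where S is the mean service time and rho is CPU utilization.
    # The numbers are hypothetical, chosen only to show the shape of the curve.

    service_time_ms = 10.0  # mean time to serve one request, in milliseconds

    for utilization in (0.50, 0.70, 0.80, 0.90, 0.95, 0.99):
        response_ms = service_time_ms / (1.0 - utilization)
        print(f"utilization {utilization:.0%}: average response time {response_ms:7.1f} ms")

    # Going from 50% to 99% utilization multiplies the average response time by 50x
    # (20 ms -> 1,000 ms), which is why "keep CPU well below 100%" feels safe
    # but wastes capacity.
    ```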

    However, there is a surefire way to predict CPU usage accurately, and it involves a healthy dose of applied mathematics.

    The Basics of Queueing Theory

    The complex math of queueing theory rests on a simple concept: things waiting to be done form lines. In IT, the things are workloads, which wait to be processed, and servers, which do the work at a certain service rate. Workloads form queues in front of servers.

    Think of it like the checkout system at a grocery store. Customers come in at a certain arrival rate and wait in line at the checkout counters (the servers), and queueing theory determines the fastest way to get the shoppers out the door.

    Of course, there can be multiple checkout counters (servers) and multiple queues. Or, as at airport security, one very long line can feed many servers. Each of these layouts maps onto known IT infrastructure variables (arrival rates, service rates, the number of servers, and how queues feed them), which means capacity planning can be “solved” with queueing theory.
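    As a concrete illustration of the difference those layouts make, here is a minimal sketch using the standard Erlang C formula for an M/M/c queue (c servers sharing one line, with random arrivals and exponential service times). The arrival and service rates are made-up numbers, and the functions are my own illustration, not part of any specific product:

    ```python
    import math

    def erlang_c_wait_probability(arrival_rate, service_rate, servers):
        """Probability that an arriving job has to wait in an M/M/c queue (Erlang C)."""
        offered_load = arrival_rate / service_rate        # a = lambda / mu
        utilization = offered_load / servers              # rho; must be < 1 for a stable queue
        top = (offered_load ** servers / math.factorial(servers)) / (1.0 - utilization)
        bottom = sum(offered_load ** k / math.factorial(k) for k in range(servers)) + top
        return top / bottom

    def mean_wait(arrival_rate, service_rate, servers):
        """Average time a job spends waiting in line (not counting its own service)."""
        p_wait = erlang_c_wait_probability(arrival_rate, service_rate, servers)
        return p_wait / (servers * service_rate - arrival_rate)

    # Hypothetical numbers: 5 customers arrive per minute; each checkout serves 2 per minute.
    arrivals, service = 5.0, 2.0

    # One shared line feeding 3 checkouts (the airport-security layout):
    print(f"pooled, 3 servers:  avg wait {mean_wait(arrivals, service, 3):.2f} min")

    # Three separate lines, each receiving a third of the customers (typical grocery layout):
    print(f"separate, 1 server: avg wait {mean_wait(arrivals / 3, service, 1):.2f} min")
    ```

    With the same total capacity, the single shared line cuts the average wait from roughly 2.5 minutes to roughly 0.7 minutes in this toy example, which is exactly the kind of tradeoff an analytic queueing solver evaluates.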

    There are two general models we use to produce efficient solutions to the problem of high demand: queueing network solvers and event simulation techniques.

    Analytic queueing network solvers are extremely time-efficient to set up and run, which makes them the preferred method for day-to-day operations. Event simulation, by contrast, handles complex network environments well and is suited to modeling specific “what if” scenarios.

    Simulating hypothetical doomsday scenarios, for example, shows IT professionals where their infrastructure’s limits lie. A good capacity planning strategy will use both of these models.
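    To give a feel for what event simulation involves, here is a minimal discrete-event sketch of a single server with one FIFO queue. The arrival and service rates are hypothetical, and this is my own illustration rather than the modeling tools described above:

    ```python
    import random

    def simulate_single_queue(arrival_rate, service_rate, n_jobs, seed=1):
        """Minimal discrete-event simulation of one server with one FIFO queue.

        Arrivals and service times are drawn from exponential distributions
        (hypothetical workload; a real model would be fed measured data).
        Returns the average time a job waits before its service starts.
        """
        random.seed(seed)

        # Generate arrival times as a running clock of random inter-arrival gaps.
        clock = 0.0
        arrival_times = []
        for _ in range(n_jobs):
            clock += random.expovariate(arrival_rate)
            arrival_times.append(clock)

        # Serve jobs in arrival order; a job waits whenever the server is still busy.
        server_free_at = 0.0
        total_wait = 0.0
        for arrival in arrival_times:
            start = max(arrival, server_free_at)
            total_wait += start - arrival
            server_free_at = start + random.expovariate(service_rate)

        return total_wait / n_jobs

    # "What if" experiment: how much longer do jobs wait if demand grows 20%?
    print(f"today:       avg wait {simulate_single_queue(0.80, 1.0, 100_000):.2f} s")
    print(f"20% growth:  avg wait {simulate_single_queue(0.96, 1.0, 100_000):.2f} s")
    ```

    With these made-up rates, the simulated average wait balloons from roughly 4 seconds to roughly 24 seconds when demand grows by 20%, which is the kind of “what if” answer a simulation run produces.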

    Pairing Accuracy and Cost-Efficiency

    Companies that avoid queueing theory, constantly ramping up server hardware to keep usage levels comfortably low, are doomed to fall into a double trap: they overspend on resources, yet incur outages anyway because they can’t predict future CPU bottlenecks. In short, this strategy carries inherent uncertainty.

    But with queueing performance modeling, companies can purchase the right amount of hardware in just the right places and in the most cost-effective manner — the goal of capacity management. For example, our automated predictive analytics, which run thousands of queueing models simultaneously, point to specific weaknesses in your hardware, alerting you when certain processes are likely to cause bottlenecks.

    This makes it simple for businesses to maintain a forward-looking strategy, provisioning effectively months and years in advance of increased demands and service spikes. In this way, IT leaders can ascertain the health and risk of their organization’s infrastructure, displayed by our software as single, straightforward numbers.

    While nobody likes to wait in lines, they’re a necessary and inevitable part of daily life. But with queueing theory, you can make them as short and efficient as mathematically possible.

    (Main image credit: Wikimedia)
