Queuing theory is the study of queues, otherwise known as waiting lines. It sounds straightforward. But unless you have an advanced math degree, queuing theory can be difficult to understand.
That’s why we’re clarifying queuing theory basics.
The basic formula behind queuing theory is Little’s Law. MIT defines it as “the average number of items in a queuing system equals the average rate at which items arrive multiplied by the average time that an item spends in the system.”
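As a quick illustration, Little's Law is often written as L = λW. Here's a minimal sketch with made-up coffee-shop numbers:

```python
# Little's Law: L = lam * W
#   L   = average number of items in the system
#   lam = average arrival rate (items per unit time)
#   W   = average time an item spends in the system

def items_in_system(arrival_rate, avg_time_in_system):
    """Average number of items in the system, per Little's Law."""
    return arrival_rate * avg_time_in_system

# Hypothetical coffee shop: 30 customers arrive per hour, and each
# spends an average of 6 minutes (0.1 hours) inside.
print(items_in_system(30, 0.1))  # → 3.0 customers in the shop, on average
```

Note that Little's Law holds regardless of the arrival pattern or service discipline, which is what makes it such a useful starting point.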
The math behind queuing theory is complex. But queuing theory itself rests on a simple concept. Things—like people or systems—form lines when waiting to do something. And these things and lines create queuing systems.
You can use queuing theory formulas to figure out the average waiting time—and determine how many resources you’ll need to meet increases in demand. (Or how many you should drop if demand drops.)
Queuing theory is everywhere.
Chances are, you see some form of queuing theory every day. It happens at every store and restaurant any time someone waits in a line. That’s why queuing theory is also known as waiting line theory.
In these scenarios, there’s a queuing system made up of customers, the lines they wait in, and the servers (in this case, cashiers) serving them.
With each queuing system, there are things to measure, like the arrival rate, the average wait time, and the service time.
Making sure you have enough lanes open doesn’t mean just looking at the number of customers standing in line. There are other variables to consider—like how long the average wait time is in each line or what happens if one of the cashiers gets distracted.
Queuing theory formulas help you take all variables into consideration. And this makes it easier for you to make informed decisions for your business.
Think of the check-out system at the grocery store. Customers are constantly arriving and getting in line to pay for their grocery items.
But not all customers are the same. Some customers get a lot of groceries. Some just get a few groceries.
Queuing theory can help the grocery store manager make sure the queuing system is as efficient as possible.
Queuing theory formulas help you take an average of the variables—like customers, average cart size, and wait time.
This helps grocery store managers understand whether they have enough people working the check-out lines. It also shows at which point the queuing system will get backed up, and how to catch up.
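To make that concrete, here's a sketch of the check-out question using the standard Erlang C formula for an M/M/c queue (several cashiers, random arrivals). The arrival and service rates are hypothetical:

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Probability an arriving customer must wait (M/M/c queue)."""
    a = arrival_rate / service_rate          # offered load (erlangs)
    rho = a / servers                        # utilization; must be < 1
    assert rho < 1, "unstable: customers arrive faster than cashiers can serve"
    top = a**servers / (factorial(servers) * (1 - rho))
    bottom = sum(a**k / factorial(k) for k in range(servers)) + top
    return top / bottom

def avg_wait(arrival_rate, service_rate, servers):
    """Average time a customer waits in line before service begins."""
    p_wait = erlang_c(arrival_rate, service_rate, servers)
    return p_wait / (servers * service_rate - arrival_rate)

# Hypothetical numbers: 90 customers arrive per hour, and each cashier
# handles 30 customers per hour. Compare 4 open lanes against 5.
for lanes in (4, 5):
    minutes = avg_wait(90, 30, lanes) * 60
    print(f"{lanes} lanes: average wait {minutes:.1f} min")
```

With these numbers, opening a fifth lane cuts the average wait from about a minute to well under 15 seconds, which is exactly the kind of trade-off a manager can weigh against the cost of staffing another register.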
When grocery stores apply queuing theory, you'll spend less time waiting in lines.
Queuing theory comes into play at fast food restaurants, too. But the role of queuing theory has changed since these restaurants first sprung up.
Decades ago, you simply drove in, went up to the window, placed your order, and paid. Then the workers cooked your food.
But this led to a lot of waiting in lines. So busier restaurants came up with an “order” window and “pick-up” window set-up. Multiple orders could be processed at the same time, so you wouldn’t need to wait as long in the “order” waiting line. But during busy periods, you’d wind up waiting in the “pick-up” line instead.
Fast-forward to today. Large fast food restaurant chains have taken advantage of analysis tools to determine how orders are distributed by time of day and day of week. Based on that analysis, management adjusts schedules and even prepares food in advance (to be kept warm by heat lamps).
That means that when you go in at a peak time, you won’t have to wait long to order or pick up your food.
When it comes to queuing theory in IT, it gets a bit more mathematical. It’s typically applied to the capacity of servers and systems.
Instead of customers waiting to make purchases, there are workloads waiting to get work done. And, instead of cashiers processing orders, there are servers processing workloads. So, workloads queue up to wait for servers.
In this case, the queue is the mechanism for increasing performance. But there’s more than the queue to consider.
Take latency—the time it takes to do the work plus the time spent waiting in the queue. Your goal is to keep latency down and keep the servers processing workloads as quickly as possible.
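For a single server with random (Poisson) arrivals and exponential service times (the classic M/M/1 model), average latency works out to 1/(μ − λ). A small sketch with hypothetical request rates shows how latency explodes as the server approaches full utilization:

```python
def mm1_latency(arrival_rate, service_rate):
    """Average latency (queue wait + service time) in an M/M/1 queue."""
    assert arrival_rate < service_rate, "unstable: arrivals exceed capacity"
    return 1.0 / (service_rate - arrival_rate)

service_rate = 100.0  # requests/second one server can handle (assumed)
for load in (50, 80, 90, 99):
    ms = mm1_latency(load, service_rate) * 1000
    # load equals utilization percent here because service_rate is 100
    print(f"{load} req/s ({load}% utilization): {ms:.0f} ms average latency")
```

Notice the nonlinearity: going from 50 to 80 percent utilization more than doubles latency, and the last few percent before saturation cost more than everything before them. That is why capacity planners leave headroom.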
Queuing theory can help you measure and analyze the performance of systems and servers. Here’s how. Use data collectors to gather information. And then use that information to build queuing network models. These can help you understand what’s happening now—and what might happen in the future.
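One simple kind of queuing model is a discrete-event simulation: replay arrivals and service times (drawn here from assumed exponential distributions, standing in for collected data) and measure the resulting latency. A minimal single-server sketch:

```python
import random

def simulate_queue(arrival_rate, service_rate, n_jobs=200_000, seed=1):
    """Estimate average latency by replaying jobs through one server."""
    rng = random.Random(seed)
    clock = 0.0         # time of the current job's arrival
    server_free = 0.0   # time at which the server next becomes idle
    total_latency = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(arrival_rate)   # next arrival
        start = max(clock, server_free)          # wait if server is busy
        server_free = start + rng.expovariate(service_rate)
        total_latency += server_free - clock     # wait + service time
    return total_latency / n_jobs

# Hypothetical server: handles 100 req/s, receives 80 req/s.
# M/M/1 theory predicts 1/(100-80) = 50 ms average latency.
print(f"{simulate_queue(80, 100) * 1000:.1f} ms")
```

The value of a simulation over a closed-form formula is that, once it matches reality, you can feed it real measured distributions and what-if scenarios that have no tidy formula.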
Queuing theory gives you the ability to predict the future—and make sure you can meet demand.
Your organization will inevitably reach a point where the number of systems and servers running today isn’t enough. When it gets to that point, you can do one of two things: add capacity before the bottleneck hits, or scramble to fix it after it appears.
Queuing theory takes the guesswork out of these decisions, so you can effectively plan for capacity at your organization. You’ll be able to definitively know what you’ll need if, say, you add 10 percent more users. And this means you can plan out your budget better for the long-term.
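As a sketch of that kind of capacity question, with hypothetical throughput numbers: find the smallest server count that keeps per-server utilization under a target, both today and after 10 percent growth.

```python
import math

def servers_needed(arrival_rate, per_server_rate, max_utilization=0.7):
    """Smallest server count keeping utilization under a target.

    The 0.7 target is an assumed headroom policy, not a universal rule.
    """
    return math.ceil(arrival_rate / (per_server_rate * max_utilization))

current_load = 2000.0  # requests/second today (hypothetical)
per_server = 250.0     # requests/second one server handles (hypothetical)

print(servers_needed(current_load, per_server))         # servers needed today
print(servers_needed(current_load * 1.10, per_server))  # after 10% more load
```

With these numbers the answer moves from 12 servers to 13, so a 10 percent growth forecast translates directly into one additional server in next year's budget.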
So, if you predict a problem in January, you can fix it before it actually becomes a problem. That’s much more effective (and cost-effective) than throwing money at the problem (after it happens) and hoping it will work out.
Nobody likes to wait in lines. But they’re a necessary and inevitable part of daily life. And with queuing theory, you can make them as short and efficient as mathematically possible.
This means you can predict demand, plan capacity accordingly, and keep wait times short.
Learn more about applying queuing theory to capacity planning.