Queueing theory is the mathematical study of waiting lines, or queues.[1] A queueing model is constructed so that queue lengths and waiting time can be predicted. Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service.
Queueing theory has its origins in research by Agner Krarup Erlang, who created models to describe the system of incoming calls at the Copenhagen Telephone Exchange Company. These ideas were seminal to the field of teletraffic engineering and have since seen applications in telecommunication, traffic engineering, computing,[2] project management, and particularly industrial engineering, where they are applied in the design of factories, shops, offices, and hospitals.[3] [4]
The spelling "queueing", rather than "queuing", is the one typically encountered in the academic research field; indeed, one of the flagship journals of the field is Queueing Systems.
Queueing theory is one of the major areas of study in the discipline of management science, through which businesses use scientific and mathematical approaches to solve a variety of problems. Queueing analysis is the probabilistic analysis of waiting lines, so its results, also referred to as the operating characteristics, are probabilistic rather than deterministic.[5] The operating characteristics that queueing models compute include the probability that n customers are in the queueing system, the average number of customers in the system, the average number of customers in the waiting line, the average time a customer spends in the total system, the average time a customer spends in the waiting line, and the probability that the server is busy or idle. The overall goal of queueing analysis is to compute these characteristics for the current system and then test several alternatives that could lead to improvement. Computing the operating characteristics for the current system and comparing them with those of the alternative systems allows managers to weigh the pros and cons of each potential option and supports the final decision by showing ways to increase savings, reduce waiting time, and improve efficiency. The main queueing models are the single-server and the multiple-server waiting line systems, discussed further below. These models can be further differentiated by whether service times are constant or follow a probability distribution, whether the queue length is finite, whether the calling population is finite, and so on.
A queue or queueing node can be thought of as nearly a black box. Jobs (also called customers or requests, depending on the field) arrive to the queue, possibly wait some time, take some time being processed, and then depart from the queue.
However, the queueing node is not quite a pure black box since some information is needed about the inside of the queueing node. The queue has one or more servers which can each be paired with an arriving job. When the job is completed and departs, that server will again be free to be paired with another arriving job.
An analogy often used is that of the cashier at a supermarket. (There are other models, but this one is commonly encountered in the literature.) Customers arrive, are processed by the cashier, and depart. Each cashier processes one customer at a time, so this is a queueing node with only one server. A setting in which a customer leaves immediately if the cashier is busy when the customer arrives is referred to as a queue with no buffer (or no waiting area). A setting with a waiting zone for up to n customers is called a queue with a buffer of size n.
See also: Survival analysis. The behaviour of a single queue (also called a queueing node) can be described by a birth–death process, which describes the arrivals and departures from the queue, along with the number of jobs currently in the system. If k denotes the number of jobs in the system (either being serviced or waiting if the queue has a buffer of waiting jobs), then an arrival increases k by 1 and a departure decreases k by 1.
The system transitions between values of k by "births" and "deaths", which occur at the arrival rates \lambda_i and the departure rates \mu_i for each job i. For a queue, a single average arrival rate and a single average departure rate are commonly assumed:

\lambda = \text{avg}(\lambda_1, \lambda_2, \ldots, \lambda_k)
\mu = \text{avg}(\mu_1, \mu_2, \ldots, \mu_k)
The steady state equations for the birth-and-death process, known as the balance equations, are as follows. Here P_n denotes the steady state probability of being in state n.

\mu_1 P_1 = \lambda_0 P_0
\lambda_0 P_0 + \mu_2 P_2 = (\lambda_1 + \mu_1) P_1
\lambda_{n-1} P_{n-1} + \mu_{n+1} P_{n+1} = (\lambda_n + \mu_n) P_n
The first two equations imply

P_1 = \frac{\lambda_0}{\mu_1} P_0

and

P_2 = \frac{\lambda_1}{\mu_2} P_1 + \frac{1}{\mu_2}(\mu_1 P_1 - \lambda_0 P_0) = \frac{\lambda_1}{\mu_2} P_1 = \frac{\lambda_1 \lambda_0}{\mu_2 \mu_1} P_0.
By mathematical induction,
P_n = \frac{\lambda_{n-1} \lambda_{n-2} \cdots \lambda_0}{\mu_n \mu_{n-1} \cdots \mu_1} P_0 = P_0 \prod_{i=0}^{n-1} \frac{\lambda_i}{\mu_{i+1}}
The condition

\sum_{n=0}^{\infty} P_n = P_0 + P_0 \sum_{n=1}^{\infty} \prod_{i=0}^{n-1} \frac{\lambda_i}{\mu_{i+1}} = 1

leads to

P_0 = \frac{1}{1 + \sum_{n=1}^{\infty} \prod_{i=0}^{n-1} \frac{\lambda_i}{\mu_{i+1}}},

which, together with the equation for P_n (n \geq 1), fully describes the required steady state probabilities.
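These formulas can be evaluated numerically by truncating the infinite sum at a finite number of states. The following minimal Python sketch does so; the function name, the truncation level, and the example rates are illustrative assumptions.

    def birth_death_stationary(lam, mu, n_max):
        """Approximate steady state probabilities P_0..P_{n_max} of a birth-death queue.

        lam[i] is the birth (arrival) rate lambda_i out of state i, and mu[i] is the
        death (departure) rate mu_{i+1} out of state i+1, matching the product formula
        P_n = P_0 * prod_{i=0}^{n-1} lambda_i / mu_{i+1}. Truncating at n_max states
        approximates the infinite normalizing sum."""
        terms = [1.0]  # unnormalized P_n / P_0, starting with n = 0
        for n in range(1, n_max + 1):
            terms.append(terms[-1] * lam[n - 1] / mu[n - 1])
        p0 = 1.0 / sum(terms)
        return [p0 * t for t in terms]

    # Constant rates reduce to the M/M/1 case: P_n should be close to (1 - 2/3) * (2/3)**n.
    P = birth_death_stationary(lam=[2.0] * 50, mu=[3.0] * 50, n_max=50)
    print([round(p, 4) for p in P[:4]])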
See main article: Kendall's notation. Single queueing nodes are usually described using Kendall's notation in the form A/S/c where A describes the distribution of durations between each arrival to the queue, S the distribution of service times for jobs, and c the number of servers at the node.[6] [7] For an example of the notation, the M/M/1 queue is a simple model where a single server serves jobs that arrive according to a Poisson process (where inter-arrival durations are exponentially distributed) and have exponentially distributed service times (the M denotes a Markov process). In an M/G/1 queue, the G stands for "general" and indicates an arbitrary probability distribution for service times.
Consider a queue with one server and the following characteristics:
λ, the arrival rate; μ, the reciprocal of the mean service time (the service rate); and P_n, the probability that there are n customers in the system in steady state.
Further, let E_n represent the number of times the system enters state n, and L_n the number of times the system leaves state n. Then \left|E_n - L_n\right| \in \{0, 1\} for all n: the number of times the system leaves a state differs by at most 1 from the number of times it enters that state, since it will either return to that state at some point in the future (E_n = L_n) or not (\left|E_n - L_n\right| = 1).
When the system arrives at a steady state, the arrival rate should be equal to the departure rate.
Thus the balance equations

\mu P_1 = \lambda P_0
\lambda P_0 + \mu P_2 = (\lambda + \mu) P_1
\lambda P_{n-1} + \mu P_{n+1} = (\lambda + \mu) P_n

imply

P_n = \frac{\lambda}{\mu} P_{n-1}, \quad n = 1, 2, \ldots

Since P_0 + P_1 + \cdots = 1, the steady state probabilities are

P_n = (1 - \rho)\rho^n

where \rho = \frac{\lambda}{\mu} < 1.
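As a rough sanity check of this geometric form, the following Python sketch simulates an M/M/1 queue and compares the long-run fraction of time spent with n customers in the system against (1 − ρ)ρ^n; the function name, rates, and event count are arbitrary choices for illustration.

    import random

    def simulate_mm1(lam, mu, num_events=200_000, seed=1):
        """Rough discrete-event simulation of an M/M/1 queue.

        Tracks the fraction of simulated time spent with n customers present,
        for comparison with the stationary law P_n = (1 - rho) * rho**n."""
        rng = random.Random(seed)
        t, n = 0.0, 0                              # simulated clock, customers in system
        time_in_state = {}
        for _ in range(num_events):
            rate = lam + (mu if n > 0 else 0.0)    # total transition rate in this state
            dt = rng.expovariate(rate)             # exponential holding time
            time_in_state[n] = time_in_state.get(n, 0.0) + dt
            t += dt
            if rng.random() < lam / rate:          # next event is an arrival ...
                n += 1
            else:                                  # ... or a departure (only if n > 0)
                n -= 1
        return {k: v / t for k, v in sorted(time_in_state.items())}

    lam, mu = 2.0, 3.0
    rho = lam / mu
    empirical = simulate_mm1(lam, mu)
    for n in range(5):
        print(n, round(empirical.get(n, 0.0), 3), round((1 - rho) * rho ** n, 3))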
A common basic queueing system is attributed to Erlang and is a modification of Little's Law. Given an arrival rate λ, a dropout rate σ, and a departure rate μ, the length of the queue L is defined as:

L = \frac{\lambda - \sigma}{\mu}
Assuming an exponential distribution for the rates, the waiting time W can be determined from the proportion of arrivals that are served, which equals the exponential survival rate of those who do not drop out over the waiting period, giving:

\frac{\mu}{\lambda} = e^{-W\mu}
The second equation is commonly rewritten as:
W = \frac{1}{\mu} \ln\frac{\lambda}{\mu}
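Read directly, the two formulas above can be evaluated as in the short sketch below; the rate values are made up purely for illustration.

    import math

    def queue_length(lam, sigma, mu):
        """L = (lam - sigma) / mu: queue length with arrival rate lam,
        dropout rate sigma, and departure rate mu."""
        return (lam - sigma) / mu

    def waiting_time(lam, mu):
        """W = (1 / mu) * ln(lam / mu), obtained from mu / lam = exp(-W * mu)."""
        return math.log(lam / mu) / mu

    print(queue_length(lam=10.0, sigma=2.0, mu=4.0))   # 2.0 customers
    print(round(waiting_time(lam=10.0, mu=4.0), 3))    # about 0.229 time units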
The two-stage one-box model is common in epidemiology.[8]
In 1909, Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on what would now be called queueing theory.[9] [10] [11] He modeled the number of telephone calls arriving at an exchange by a Poisson process and solved the M/D/1 queue in 1917 and the M/D/k queueing model in 1920.[12] In Kendall's notation, M stands for "Markov" or "memoryless", meaning arrivals occur according to a Poisson process; D stands for "deterministic", meaning jobs require a fixed amount of service; and k describes the number of servers at the queueing node. If the node has more jobs than servers, then jobs will queue and wait for service.
The M/G/1 queue was solved by Felix Pollaczek in 1930,[13] a solution later recast in probabilistic terms by Aleksandr Khinchin and now known as the Pollaczek–Khinchine formula.
After the 1940s, queueing theory became an area of research interest to mathematicians.[14] In 1953, David George Kendall solved the GI/M/k queue[15] and introduced the modern notation for queues, now known as Kendall's notation. In 1957, Pollaczek studied the GI/G/1 queue using an integral equation.[16] John Kingman gave a formula for the mean waiting time in a G/G/1 queue, now known as Kingman's formula.[17]
Leonard Kleinrock worked on the application of queueing theory to message switching in the early 1960s and packet switching in the early 1970s. His initial contribution to this field was his doctoral thesis at the Massachusetts Institute of Technology in 1962, published in book form in 1964. His theoretical work published in the early 1970s underpinned the use of packet switching in the ARPANET, a forerunner to the Internet.
The matrix geometric method and matrix analytic methods have allowed queues with phase-type distributed inter-arrival and service times to be considered.[18]
Systems with coupled orbits are an important part of queueing theory in applications to wireless networks and signal processing.[19]
Modern-day applications of queueing theory include, among other things, product development, where (material) products have a spatiotemporal existence in the sense that they have both a certain volume and a certain duration.[20]
Some problems, such as computing performance metrics for the M/G/k queue, remain open.
Various scheduling policies can be used at queueing nodes, such as first-come-first-served, last-come-first-served, processor sharing, priority service, and shortest-job-first.
In models with an unreliable server, server failures occur according to a stochastic (random) process (usually Poisson) and are followed by setup periods during which the server is unavailable; the interrupted customer remains in the service area until the server is fixed.[27]
Arriving customers not served (either due to the queue having no buffer, or due to balking or reneging by the customer) are also known as dropouts. The average rate of dropouts is a significant parameter describing a queue.
Queue networks are systems in which multiple queues are connected by customer routing. When a customer is serviced at one node, it can join another node and queue for service, or leave the network.
For networks of m nodes, the state of the system can be described by an m-dimensional vector (x1, x2, ..., xm), where xi represents the number of customers at node i.
The simplest non-trivial networks of queues are called tandem queues.[28] The first significant results in this area were Jackson networks,[29] [30] for which an efficient product-form stationary distribution exists and for which mean value analysis[31] allows average metrics such as throughput and sojourn times to be computed.[32] If the total number of customers in the network remains constant, the network is called a closed network and has been shown to also have a product-form stationary distribution by the Gordon–Newell theorem.[33] This result was extended to the BCMP network,[34] in which a network with very general service time distributions, service regimes, and customer routing is shown to also exhibit a product-form stationary distribution. The normalizing constant can be calculated with Buzen's algorithm, proposed in 1973.[35]
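To illustrate how mean value analysis proceeds for a closed product-form network, here is a minimal Python sketch of the exact MVA recursion under the assumption that every station is a single-server FCFS queue; the visit ratios and service times in the example are hypothetical.

    def mean_value_analysis(visit_ratios, service_times, num_customers):
        """Exact MVA for a closed product-form network of single-server FCFS stations.

        Returns the system throughput and the mean number of customers at each
        station when num_customers circulate in the network."""
        m = len(service_times)
        L = [0.0] * m                      # mean queue lengths, starting from an empty network
        X = 0.0
        for n in range(1, num_customers + 1):
            # Mean response time seen by an arriving customer at each station
            W = [service_times[i] * (1.0 + L[i]) for i in range(m)]
            # Throughput follows from the total cycle time
            X = n / sum(visit_ratios[i] * W[i] for i in range(m))
            # Little's law applied station by station
            L = [X * visit_ratios[i] * W[i] for i in range(m)]
        return X, L

    # Hypothetical three-station closed network with 5 circulating customers
    X, L = mean_value_analysis(visit_ratios=[1.0, 0.6, 0.4],
                               service_times=[0.05, 0.08, 0.10],
                               num_customers=5)
    print(round(X, 3), [round(x, 3) for x in L])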
Networks of customers have also been investigated, such as Kelly networks, where customers of different classes experience different priority levels at different service nodes.[36] Another type of network is the G-network, first proposed by Erol Gelenbe in 1993;[37] these networks do not assume exponential time distributions like the classic Jackson network.
See also: Stochastic scheduling. In discrete-time networks where there is a constraint on which service nodes can be active at any time, the max-weight scheduling algorithm chooses a service policy to give optimal throughput in the case that each job visits only a single service node. In the more general case where jobs can visit more than one node, backpressure routing gives optimal throughput. A network scheduler must choose a queueing algorithm, which affects the characteristics of the larger network.[38]
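A minimal sketch of the max-weight rule follows, assuming the set of feasible activation (service-rate) vectors is given as input; the two-queue example constraint is hypothetical.

    def max_weight_schedule(backlogs, feasible_schedules):
        """Choose the feasible service-rate vector maximizing sum_i q_i * r_i,
        i.e. the max-weight rule: weight each queue's rate by its backlog."""
        return max(feasible_schedules,
                   key=lambda rates: sum(q * r for q, r in zip(backlogs, rates)))

    # Two conflicting servers, only one of which may be active per time slot
    queues = [7, 3]
    schedules = [(1, 0), (0, 1)]
    print(max_weight_schedule(queues, schedules))  # serves the longer queue: (1, 0)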
Mean-field models consider the limiting behaviour of the empirical measure (proportion of queues in different states) as the number of queues m approaches infinity. The impact of other queues on any given queue in the network is approximated by a differential equation. The deterministic model converges to the same stationary distribution as the original model.[39]
See main article: Heavy traffic approximation. In a system with high occupancy rates (utilisation near 1), a heavy traffic approximation can be used to approximate the queueing length process by a reflected Brownian motion,[40] Ornstein–Uhlenbeck process, or more general diffusion process.[41] The number of dimensions of the Brownian process is equal to the number of queueing nodes, with the diffusion restricted to the non-negative orthant.
See main article: Fluid limit. Fluid models are continuous deterministic analogs of queueing networks obtained by taking the limit when the process is scaled in time and space, allowing heterogeneous objects. This scaled trajectory converges to a deterministic equation which allows the stability of the system to be proven. It is known that a queueing network can be stable but have an unstable fluid limit.[42]
Queueing theory finds widespread application in computer science and information technology. In networking, for instance, queues are integral to routers and switches, where packets queue up for transmission. By applying queueing theory principles, designers can optimize these systems, ensuring responsive performance and efficient resource utilization.

Beyond the technological realm, queueing theory is relevant to everyday experiences. Whether waiting in line at a supermarket or for public transportation, understanding the principles of queueing theory provides valuable insight into how such systems can be optimized for better user satisfaction; an arrangement that an individual experiences as an inconvenience may nevertheless be the most effective one for the system as a whole.

As a discipline rooted in applied mathematics and computer science, queueing theory is essential in contexts such as traffic systems, computer networks, telecommunications, and service operations. Its foundational concepts include the arrival process, which describes how entities join the queue over time and is often modeled by a stochastic process such as a Poisson process, and the service process. The efficiency of queueing systems is gauged through key performance metrics such as the average queue length, the average waiting time, and the system throughput; these metrics provide insight into a system's behaviour and guide decisions aimed at improving performance and reducing waiting times.