A network scheduler, also called packet scheduler, queueing discipline (qdisc) or queueing algorithm, is an arbiter on a node in a packet switching communication network. It manages the sequence of network packets in the transmit and receive queues of the protocol stack and network interface controller. Several network schedulers are available for the different operating systems, implementing many of the existing network scheduling algorithms.
The network scheduler logic decides which network packet to forward next. The network scheduler is associated with a queuing system, which stores the network packets temporarily until they are transmitted. Systems may have a single queue or multiple queues, in which case each queue may hold the packets of one flow, classification, or priority.
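As a minimal illustration of that decision logic, the following sketch keeps one FIFO queue per priority band and always dequeues from the highest-priority non-empty band. The structure names, band count, and queue sizes are assumptions of the sketch, not any particular scheduler's implementation.

```c
/* Minimal sketch of a strict-priority dequeue decision over several
 * FIFO queues, one per priority band.  All names and sizes here are
 * illustrative assumptions, not part of any real scheduler API. */
#include <stdio.h>

#define NUM_BANDS 3   /* e.g. high, normal, low priority */
#define QUEUE_LEN 64

struct packet {
    int id;           /* stand-in for real packet contents */
};

struct fifo {
    struct packet slots[QUEUE_LEN];
    int head, tail, count;
};

static struct fifo bands[NUM_BANDS];

static int enqueue(int band, struct packet p)
{
    struct fifo *q = &bands[band];
    if (q->count == QUEUE_LEN)
        return -1;                 /* tail drop when the band is full */
    q->slots[q->tail] = p;
    q->tail = (q->tail + 1) % QUEUE_LEN;
    q->count++;
    return 0;
}

/* The scheduling decision: always serve the highest-priority
 * non-empty band first (strict priority). */
static int dequeue(struct packet *out)
{
    for (int band = 0; band < NUM_BANDS; band++) {
        struct fifo *q = &bands[band];
        if (q->count > 0) {
            *out = q->slots[q->head];
            q->head = (q->head + 1) % QUEUE_LEN;
            q->count--;
            return band;
        }
    }
    return -1;                     /* all queues empty */
}

int main(void)
{
    enqueue(2, (struct packet){ .id = 1 });   /* low priority */
    enqueue(0, (struct packet){ .id = 2 });   /* high priority */

    struct packet p;
    while (dequeue(&p) >= 0)
        printf("transmit packet %d\n", p.id); /* prints 2 first, then 1 */
    return 0;
}
```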
In some cases it may not be possible to schedule all transmissions within the constraints of the system. In these cases the network scheduler is responsible for deciding which traffic to forward and which to drop.
A network scheduler may be responsible for implementing specific network traffic control initiatives. Network traffic control is an umbrella term for all measures aimed at reducing network congestion, latency and packet loss. Specifically, active queue management (AQM) is the selective dropping of queued network packets to achieve the larger goal of preventing excessive network congestion; the scheduler must choose which packets to drop. Traffic shaping smooths the bandwidth requirements of traffic flows by delaying the transmission of packets that arrive in bursts; the scheduler decides the timing of the transmitted packets. Quality of service (QoS) is the prioritization of traffic based on service class (Differentiated services) or reserved connection (Integrated services).
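The delaying behind traffic shaping is commonly explained with a token bucket: a packet may only be sent when enough tokens, accumulated at the configured rate, are available. The sketch below shows that check; the rate, burst size and field names are illustrative assumptions and do not reflect the kernel's TBF data structures.

```c
/* Illustrative token-bucket check in the spirit of a TBF-style shaper;
 * the field names and parameter values are assumptions for the sketch. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct token_bucket {
    double rate_bps;      /* sustained rate in bytes per second */
    double burst_bytes;   /* bucket depth: largest allowed burst */
    double tokens;        /* currently available tokens, in bytes */
    double last_time;     /* time of the last update, in seconds */
};

/* Refill tokens for the elapsed time, then decide whether a packet of
 * pkt_len bytes may be sent now or must wait in the queue. */
static bool tb_may_send(struct token_bucket *tb, double now, uint32_t pkt_len)
{
    tb->tokens += (now - tb->last_time) * tb->rate_bps;
    if (tb->tokens > tb->burst_bytes)
        tb->tokens = tb->burst_bytes;
    tb->last_time = now;

    if (tb->tokens >= pkt_len) {
        tb->tokens -= pkt_len;
        return true;      /* transmit immediately */
    }
    return false;         /* delay: the packet stays queued */
}

int main(void)
{
    struct token_bucket tb = {
        .rate_bps = 125000.0,   /* 1 Mbit/s expressed in bytes/s */
        .burst_bytes = 3000.0,
        .tokens = 3000.0,
        .last_time = 0.0,
    };

    /* Three 1500-byte packets arriving back to back at t = 0: the first
     * two fit in the burst, the third must be delayed. */
    for (int i = 0; i < 3; i++)
        printf("packet %d: %s\n", i,
               tb_may_send(&tb, 0.0, 1500) ? "send" : "delay");
    return 0;
}
```

A real shaper would keep the delayed packet queued and re-check it once enough tokens have accrued, which is exactly the timing decision described above.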
Over time, many network queueing disciplines have been developed. Each of these provides specific reordering or dropping of network packets inside various transmit or receive buffers.[1] Queueing disciplines are commonly used to compensate for various networking conditions, for example by reducing the latency for certain classes of network packets, and are generally used as part of QoS measures.[2][3][4]
Classful queueing disciplines allow the creation of classes, which work like branches on a tree. Rules can then be set to filter packets into each class. Each class can itself be assigned another classful or classless queueing discipline. Classless queueing disciplines do not allow further queueing disciplines to be attached to them.[5]
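Viewed as a data structure, a classful qdisc is an interior node whose classes each hold a child qdisc, while a classless qdisc is always a leaf. The sketch below models only that shape; the type and field names are assumptions for illustration and do not mirror the kernel's structures.

```c
/* Sketch of the classful/classless distinction as a tree of qdiscs.
 * Purely illustrative types; not the kernel's data structures. */
#include <stdio.h>

struct qdisc {
    const char *name;          /* e.g. "htb" (classful), "pfifo" (classless) */
    int num_classes;           /* 0 for a classless qdisc */
    struct qdisc **children;   /* one nested qdisc per class, or NULL */
};

static void print_tree(const struct qdisc *q, int depth)
{
    printf("%*s%s (%s)\n", depth * 2, "", q->name,
           q->num_classes ? "classful" : "classless");
    for (int i = 0; i < q->num_classes; i++)
        print_tree(q->children[i], depth + 1);
}

int main(void)
{
    /* Classless leaves: no further qdiscs can be attached to them. */
    struct qdisc leaf_a = { "pfifo",    0, NULL };
    struct qdisc leaf_b = { "fq_codel", 0, NULL };

    /* A classful root: filter rules would steer packets into one of its
     * classes, each backed by its own child qdisc. */
    struct qdisc *kids[] = { &leaf_a, &leaf_b };
    struct qdisc root = { "htb", 2, kids };

    print_tree(&root, 0);
    return 0;
}
```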
Examples of algorithms suitable for managing network traffic include:
Name | Abbreviation | Type
Generic cell rate algorithm | GCRA |
CHOose and Kill for unresponsive flows | CHOKe | Classless
Controlled delay | CoDel | Classless
Common Applications Kept Enhanced[6] | CAKE |
Earliest TxTime First | ETF | Classless
First in, first out | FIFO | Classless
Fair queuing | FQ | Classless
Fair Queuing Controlled Delay | FQ-CoDel | Classless
Flow Queuing with Proportional Integral controller Enhanced | FQ-PIE | Classless
Generalized Random Early Detection | GRED | Classless
Heavy-Hitter Filter[7] | HHF | Classless
Multiqueue Priority | MQ-PRIO | Classless
Multiqueue | MULTIQ | Classless
Network Emulator[8] | NETEM | Classless
Proportional Integral controller-Enhanced[9] | PIE | Classless
Random early detection | RED | Classless
Stochastic fair Blue | SFB | Classless
Stochastic Fairness Queueing | SFQ | Classless
Token Bucket Filter | TBF | Classless
Class-based queueing | CBQ | Classful
Credit-Based Shaper | CBS | Classful
Deficit round robin[10] | DRR | Classful
Enhanced Transmission Selection | ETS | Classful
Hierarchical fair-service curve | HFSC | Classful
Hierarchical Token Bucket[11] | HTB | Classful
Priority | PRIO | Classful
Quick Fair Queueing[12] | QFQ | Classful
Time Aware Priority Shaper | TAPRIO | Classful
Several of the above have been implemented as Linux kernel modules[13][14] and are freely available.
Bufferbloat is a phenomenon in packet-switched networks in which excess buffering of packets causes high latency and packet delay variation. Bufferbloat can be addressed by a network scheduler that strategically discards packets to avoid an unnecessarily high buffering backlog. Examples include CoDel, FQ-CoDel and random early detection.
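The dropping strategy of such AQM schemes can be illustrated with a simplified RED-style decision, in which the drop probability rises linearly with the average queue size between a minimum and a maximum threshold. The parameter values below are illustrative, and real RED additionally maintains an exponentially weighted moving average of the queue size.

```c
/* Simplified RED-style drop decision: drop probability grows linearly
 * with the average queue size between min_th and max_th.  Illustrative
 * parameters only; not the full RED algorithm. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct red_params {
    double min_th;    /* below this average queue size: never drop */
    double max_th;    /* at or above this: always drop */
    double max_p;     /* drop probability reached at max_th */
};

static bool red_drop(const struct red_params *p, double avg_qlen)
{
    if (avg_qlen < p->min_th)
        return false;
    if (avg_qlen >= p->max_th)
        return true;

    double prob = p->max_p * (avg_qlen - p->min_th) / (p->max_th - p->min_th);
    return ((double)rand() / RAND_MAX) < prob;
}

int main(void)
{
    struct red_params p = { .min_th = 5, .max_th = 15, .max_p = 0.1 };
    for (double q = 0; q <= 20; q += 5)
        printf("avg queue %4.1f -> %s\n", q,
               red_drop(&p, q) ? "drop" : "enqueue");
    return 0;
}
```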
The Linux kernel packet scheduler is an integral part of the Linux kernel's network stack and manages the transmit and receive ring buffers of all NICs. It operates at layer 2 of the OSI model, handling Ethernet frames, for example.
The packet scheduler is configured using the utility called [[Tc (Linux)|tc]] (short for "traffic control"). As the default queueing discipline, the packet scheduler uses a FIFO implementation called pfifo_fast,[15] although systemd since its version 217 changes the default queueing discipline to fq_codel.[16]
The [[ifconfig]] and [[Iproute2|ip]] utilities enable system administrators to configure the buffer sizes txqueuelen and rxqueuelen for each device separately, in terms of the number of Ethernet frames regardless of their size. The Linux kernel's network stack contains several other buffers, which are not managed by the network scheduler.
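On typical Linux systems the current transmit queue length can also be read from sysfs, as in this minimal sketch; the interface name eth0 is an assumption of the example.

```c
/* Minimal sketch: read the transmit queue length of one interface from
 * sysfs.  The interface name "eth0" is an assumption; the value is the
 * same quantity configured as txqueuelen. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/class/net/eth0/tx_queue_len", "r");
    if (!f) {
        perror("open tx_queue_len");
        return 1;
    }

    unsigned long frames;
    if (fscanf(f, "%lu", &frames) == 1)
        printf("eth0 txqueuelen: %lu frames\n", frames);
    fclose(f);
    return 0;
}
```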
Berkeley Packet Filter filters can be attached to the packet scheduler's classifiers. The eBPF functionality brought by version 4.1 of the Linux kernel in 2015 extends the classic BPF programmable classifiers to eBPF.[17] These can be compiled using the LLVM eBPF backend and loaded into a running kernel using the tc utility.[18]
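As a concrete illustration, the following restricted-C program is the kind of object that could be compiled with the LLVM BPF backend (clang -O2 -target bpf -c) and attached as a tc classifier. The drop policy (discarding IPv4 UDP) is purely illustrative, the "classifier" section name follows the iproute2 convention, and bpf_htons is assumed to come from libbpf's bpf_endian.h header.

```c
/* Minimal sketch of a tc classifier/action program in restricted C.
 * The policy (drop IPv4 UDP, pass everything else) is illustrative. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_endian.h>

__attribute__((section("classifier"), used))
int cls_main(struct __sk_buff *skb)
{
    void *data     = (void *)(long)skb->data;
    void *data_end = (void *)(long)skb->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return TC_ACT_OK;                 /* too short: let it pass */

    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return TC_ACT_OK;                 /* not IPv4 */

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return TC_ACT_OK;

    if (ip->protocol == IPPROTO_UDP)
        return TC_ACT_SHOT;               /* drop UDP in this toy policy */

    return TC_ACT_OK;
}

char _license[] __attribute__((section("license"), used)) = "GPL";
```

Such an object could then be attached in direct-action mode, roughly along the lines of tc qdisc add dev eth0 clsact followed by tc filter add dev eth0 ingress bpf da obj cls.o, so that the program's return values are interpreted as tc actions.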
ALTQ is the implementation of a network scheduler for BSDs. As of OpenBSD version 5.5, ALTQ was replaced by the HFSC scheduler.