Queue scheduling is a core problem in computer networks. Over the past few decades, a large number of scheduling algorithms have been designed to provide different features and optimize different goals in scenarios such as industrial networks, data center networks, and wide area networks. Programmable packet scheduling has been a crown jewel of data center network research in recent years, and as applications keep raising their requirements for network service quality, the queue mechanisms in deterministic networks are constantly being reinvented. This article will show you how to play with queues. It is divided into three sections: the concept of queues, the evolution of queues, and the enhancement of queue determinism, revealing the core ideas behind many of these mechanisms.
This article first explains the concept of queues: starting from the position of queues, it introduces the entry, scheduling, and exit processes of a single queue, and presents two classic queue forms, First-In-First-Out (FIFO) and Push-In-First-Out (PIFO), along with related scheduling algorithms. It then analyzes the evolution of the queue mechanism: from a single queue to multiple queues, from hardware queues to software queues, and across scheduling granularities such as per-packet, per-flow, per-class, and per-queue. Finally, it explains queue enhancement mechanisms in deterministic networks such as traffic shaping, gating, and frame preemption.

The concept of queues

What is a queue?

As shown in Figure 1, a switch has multiple ports connected by a switch matrix. A data packet always enters from one port, passes through the switch matrix, and exits from another port. In full-duplex mode, each port is both an inbound port and an outbound port. The switch also contains a storage resource called a buffer. The on-chip buffer of most switches is not large, usually a few MB to tens of MB; spread evenly across the ports, each port gets only a few hundred KB, enough to absorb a burst of a few dozen packets. Although single-port bandwidth has grown from 1G to 400G in less than a decade, buffer sizes have not kept pace. A larger buffer can reduce packet loss, but it requires more addressing time, which slows packet forwarding and increases equipment cost and network latency. A queue is a data structure in the buffer that is used to optimize the packet loss rate and latency of the network for different applications.

Why do we need queues?

Queues mainly solve the queuing problem. If the network were a linear topology with uniform bandwidth, rates matched everywhere, and total traffic never exceeding the port bandwidth, then no queue would be needed.
As shown in Figure 2, real network topologies are complex, and there are traffic microbursts (such as bursts caused by TCP sliding windows) and rate mismatches between upstream and downstream ports, so queue scheduling is required. While packets wait in the buffer, different applications pull in different directions: some require low latency, favoring a small buffer and priority scheduling; some require zero packet loss, for which a larger buffer is better; some pursue throughput and utilization, requiring bandwidth optimization; some pursue fairness, requiring queue resources to be divided as evenly as possible; and some care about flow completion time, requiring the long-tail latency of traffic to be reduced. Meeting all of these application requirements at the same time makes queue scheduling enormously challenging.

Position of the queue

There are many different opinions on where the buffer queue should be placed. As shown in Figure 3, there are four main options: output port buffer, input port buffer with virtual output queues (VOQs), crosspoint buffer, and shared buffer [1].
In general, no matter where the buffer is placed, when it comes to "queue scheduling", it is best to assume by default that it occurs at the switch egress port.

Single queue scheduling:
First-in-first-out queue:
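A FIFO queue serves packets strictly in the order they arrived, and a typical switch tail-drops new arrivals when the buffer is full. A minimal Python sketch of this behavior (the class name and capacity parameter are illustrative, not from any real switch API):

```python
from collections import deque

class FifoQueue:
    """A FIFO queue: packets leave in exactly the order they arrived."""

    def __init__(self, capacity):
        self.capacity = capacity  # upper bound on buffered packets
        self.buf = deque()

    def enqueue(self, pkt):
        # Tail-drop when the buffer is full, as a typical switch would.
        if len(self.buf) >= self.capacity:
            return False  # packet dropped
        self.buf.append(pkt)
        return True

    def dequeue(self):
        return self.buf.popleft() if self.buf else None

q = FifoQueue(capacity=3)
for p in ["A", "B", "C", "D"]:
    q.enqueue(p)  # "D" is tail-dropped: the buffer holds only 3 packets
print([q.dequeue() for _ in range(3)])  # ['A', 'B', 'C']
```

Because the only scheduling decision a FIFO can make is whether to drop an arriving packet, all differentiation between applications must happen before or around the queue, which is what motivates the richer forms below.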
Push-in first-out queue:
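A PIFO queue lets each packet be "pushed in" at an arbitrary position determined by a programmable rank, while dequeue always takes the head (the smallest rank). A minimal sketch using a binary heap, with ties broken in arrival order (the rank values shown are illustrative):

```python
import heapq
import itertools

class PifoQueue:
    """A PIFO queue: packets are inserted at a position given by their
    rank, but are always popped from the head (lowest rank first)."""

    def __init__(self):
        self.heap = []
        self.seq = itertools.count()  # ties broken in arrival order

    def push(self, pkt, rank):
        heapq.heappush(self.heap, (rank, next(self.seq), pkt))

    def pop(self):
        return heapq.heappop(self.heap)[2] if self.heap else None

q = PifoQueue()
q.push("bulk-1", rank=10)
q.push("urgent", rank=1)
q.push("bulk-2", rank=10)
print([q.pop() for _ in range(3)])  # ['urgent', 'bulk-1', 'bulk-2']
```

The appeal of PIFO is that many scheduling algorithms differ only in how they compute the rank, so one hardware primitive can be reprogrammed to express many policies.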
Queue scheduling algorithm:
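A PIFO-style scheduler reduces a scheduling algorithm to a rank function over buffered packets. As a minimal sketch (the function names and packet fields are illustrative assumptions), a rank derived from the deadline yields Earliest-Deadline-First, while a rank derived from the remaining flow size serves small "mouse" flows before large "elephant" flows:

```python
import heapq

def edf_rank(pkt):
    # Earliest-Deadline-First: shorter deadline -> served first.
    return pkt["deadline"]

def srpt_rank(pkt):
    # Shortest-remaining-size-first: mouse flows (few bytes left)
    # jump ahead of elephant flows.
    return pkt["flow_bytes_left"]

def schedule(packets, rank_fn):
    """Serve packets in ascending rank order (a PIFO-style policy)."""
    heap = [(rank_fn(p), i, p) for i, p in enumerate(packets)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2]["id"] for _ in range(len(heap))]

pkts = [
    {"id": "elephant", "deadline": 50, "flow_bytes_left": 10_000_000},
    {"id": "mouse",    "deadline": 50, "flow_bytes_left": 2_000},
    {"id": "urgent",   "deadline": 5,  "flow_bytes_left": 500_000},
]
print(schedule(pkts, edf_rank))   # ['urgent', 'elephant', 'mouse']
print(schedule(pkts, srpt_rank))  # ['mouse', 'urgent', 'elephant']
```

Swapping the rank function changes the policy without touching the queue itself, which is the core idea behind programmable packet scheduling.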
In addition, when choosing which packet to serve first, the decision need not rely on rank alone. A latency estimate can be used: for example, packets with a short deadline can be prioritized. The volume of the flow can also be used: when an elephant flow and a mouse flow arrive at the same time, the small mouse flow is served first. In a deterministic network, an upper bound can be set on the queue length so that traffic never exceeds it, thereby ensuring zero packet loss and a bounded, low latency. The next section will introduce the evolution of the queue mechanism and the queue enhancement mechanisms in deterministic networks.

References:

[1] Z. Li et al., "Time-triggered switch-memory-switch architecture for time-sensitive networking switches," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 39, no. 1, pp. 185-198, 2020.
[2] Zhuolong Yu et al., "Programmable packet scheduling with a single queue," in SIGCOMM '21, New York, USA, pp. 179-193.

About the author: Huang Yudong is a first-year doctoral student at the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications. His research interests include future network architecture and deterministic networks. Email address: [email protected].