Detailed explanation: Three technical routes to achieve wide-area deterministic networks

Last time, we introduced the background of deterministic networks and noted that a deterministic network is not a single technology but a collection of protocols and mechanisms (see "Deterministic Networks: Building a Super High-Speed Rail in the Network"). So what are these technologies, how are they implemented, and how mature are they? This time, let's walk through three technical routes to wide-area deterministic networks.

1. Layer-1 Dynamic Optical-Electrical Cross-Connection / Layer-1.5 Hard Pipe Slicing

The simplest deterministic network is an all-optical network, such as a point-to-point bare-fiber connection, whose service quality is extremely stable. However, this approach cannot statistically multiplex bandwidth, does not support point-to-multipoint transmission, is expensive, and wastes a great deal of fiber resources. To network flexibly and cheaply, packet-switched Ethernet must still be introduced. Hence OTN (Optical Transport Network), which manages the electrical and optical domains in a unified way and provides huge transmission capacity, fully transparent end-to-end wavelength connections, and carrier-grade protection. Because OTN has powerful overhead, maintenance management, and networking capabilities, it has become the main technology for wide-area transport.

So does OTN have any problems? It has two main problems.

First, dynamic optical-electrical cross-connection is difficult. OTN schedules resources mainly through wavelength-division and time-division multiplexing, which requires establishing virtual-circuit connections in advance before signals can be carried on a channel. OTN can cross-connect at the optical layer, at the electrical layer, or at both, but the management overhead and complexity grow accordingly. In 2019, Nokia Bell Labs proposed the concept of the deterministic dynamic network (DDN) and OE (Optical-Ethernet) technology, which jointly schedules optical-layer time-slot resources and Ethernet queue resources and dynamically establishes and releases connections, aiming for time-division multiplexing to approach the bandwidth utilization of statistical multiplexing. It reduces management and control overhead through methods such as processing only packet headers at intermediate nodes, performing FEC error checking only at edge nodes, and strict-priority scheduling, and has been tried experimentally for deterministic interconnection between edge clouds.

Second, the bandwidth granularity is neither fine enough nor flexible enough. Currently, the minimum electrical-layer bandwidth granularity of OTN is 2.5 Gb/s. To address this, the Optical Internetworking Forum (OIF) proposed Flexible Ethernet (FlexE), a technology that provides fine-grained hard pipe slicing and service isolation at Layer 1.5. FlexE adds a shim layer between the Ethernet L2 and L1 layers. Through a time-division-multiplexed distribution mechanism, it schedules and distributes the data of multiple client interfaces onto different sub-channels by time slot, giving the network the exclusive-slot, well-isolated character of time-division multiplexing together with the statistical multiplexing and high efficiency of Ethernet.

FlexE has three application modes: link bundling, sub-rate, and channelization. Link bundling mode bundles multiple physical channels into one large logical channel to carry high-traffic services. Sub-rate mode applies when a single client's service rate is lower than the rate of a physical channel: multiple clients are aggregated to share one physical channel, improving its bandwidth utilization. Channelization mode distributes client services across time slots on multiple physical channels, so that multiple clients share multiple physical channels.

To put it simply, FlexE is an interface technology that decouples the Ethernet interface rate from the optical interface rate. You can bond four 100G interfaces into one 400G interface, or carve a 100G interface into 1G slices for dedicated services. OIF is currently considering Mb/s-level fine-granularity slicing.
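The slot-calendar idea behind FlexE can be sketched in a few lines. The 20 × 5G slot layout below matches the FlexE 1.0 calendar for a 100G PHY, but the function name and the greedy assignment are illustrative assumptions, not the actual shim implementation:

```python
# Illustrative FlexE-style slot calendar: a 100G PHY is divided into
# 20 time slots of 5G each, and client interfaces are mapped onto
# slots across one or more bonded PHYs. The greedy assignment below
# is an assumption for illustration only.

PHY_RATE_G = 100
SLOT_RATE_G = 5
SLOTS_PER_PHY = PHY_RATE_G // SLOT_RATE_G  # 20 slots per 100G PHY

def build_calendar(num_phys, clients):
    """Assign each client's required slots across the PHY group.

    clients: dict of client_name -> required rate in Gb/s.
    Returns a per-PHY calendar: one slot list per PHY, each entry a
    client name or None (unused slot).
    """
    calendar = [[None] * SLOTS_PER_PHY for _ in range(num_phys)]
    free = [(p, s) for p in range(num_phys) for s in range(SLOTS_PER_PHY)]
    for name, rate_g in clients.items():
        need = -(-rate_g // SLOT_RATE_G)  # ceil: slots needed
        if need > len(free):
            raise ValueError(f"not enough slots for client {name}")
        for _ in range(need):
            p, s = free.pop(0)
            calendar[p][s] = name  # exclusive slot -> hard isolation
    return calendar

# Channelization: two clients spread over two bonded 100G PHYs.
cal = build_calendar(2, {"clientA": 150, "clientB": 25})
used_a = sum(row.count("clientA") for row in cal)
used_b = sum(row.count("clientB") for row in cal)
print(used_a, used_b)  # 30 slots (150G) and 5 slots (25G)
```

Because each slot belongs to exactly one client, the sketch shows why FlexE behaves like time-division multiplexing (hard isolation) while the calendar as a whole is still carved out of ordinary Ethernet PHYs.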

To summarize, the first technical route uses technologies such as OE and FlexE to schedule at Layer 1/Layer 1.5 and to isolate traffic via hard pipe slicing, guaranteeing deterministic transmission at the bandwidth and service levels.

2. Network Calculus + Software-Defined Queuing

After bandwidth is made deterministic, how do we make latency and jitter deterministic? Technical route one can only guarantee determinism at the service level. To guarantee determinism per flow and per packet, network calculus and software-defined queuing are also required.

In terms of QoS guarantees, there have long been methods for reducing delay, such as intelligent routing, delay-constrained routing, and joint scheduling of bandwidth and delay. But most of these resemble the DiffServ model: they shrink the average delay/jitter in a statistical sense, i.e., they optimize an average metric. An average metric is tied to load; when the load is light, the average delay is naturally small. Determinism instead requires that the worst-case delay be bounded: even under flow bursts and aggregation, the maximum end-to-end delay of a flow must not exceed a given value. At the same time, low delay must be satisfied, i.e., the worst-case delay should be close to the minimum delay, thereby reducing delay variation (jitter). This is where deterministic network calculus comes in.

First, from the perspective of a single node and a single flow, network calculus states that if a flow's arrival rate, burst size, arrival time, and service rate are known, its service time at the node can be calculated. From the perspective of multiple flows on a single node, if this information is known for all flows, each flow's waiting and service time can be derived by (min-plus) convolution and superposition, yielding the hop-by-hop worst-case delay of each flow. Finally, from the perspective of multiple nodes and multiple flows, i.e., the whole system: if the entire network is manageable and controllable via SDN and the above information about the network and the flows is known, a theoretical end-to-end delay upper bound can be obtained for each flow by controlling the sending rate, adjusting the sending time, shaping at the edge, and so on, so that a routing path and scheduling method satisfying the delay-bound requirement can be selected for the flow in advance.
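The single-node and end-to-end bounds above can be made concrete with the classic network-calculus result: a flow constrained by a token bucket (burst b, rate r) crossing a rate-latency server (rate R, latency T) has worst-case delay T + b/R, and a tandem of such servers concatenates into one server with rate min(R_i) and latency sum(T_i), so the burst is "paid" only once. The numeric values below are made up for illustration:

```python
# Minimal network-calculus sketch (illustrative values): a token-bucket
# flow alpha(t) = b + r*t traverses rate-latency servers
# beta_i(t) = R_i * max(0, t - T_i).

def hop_delay_bound(b, R, T):
    """Worst-case delay at a single rate-latency server: T + b/R."""
    return T + b / R

def e2e_delay_bound(b, hops):
    """End-to-end bound via service-curve concatenation: the tandem
    behaves like one server with rate min(R_i) and latency sum(T_i),
    so the burst b is counted only once ("pay bursts only once")."""
    R_min = min(R for R, T in hops)
    T_sum = sum(T for R, T in hops)
    return T_sum + b / R_min

# Flow: burst b = 2 Mb, through 3 hops (rate in Mb/s, latency in s)
hops = [(100, 0.001), (200, 0.002), (100, 0.001)]
b = 2.0
naive = sum(hop_delay_bound(b, R, T) for R, T in hops)  # add per-hop bounds
tight = e2e_delay_bound(b, hops)                        # concatenated bound
print(naive, tight)  # the concatenated bound is the tighter one
```

This is exactly the calculation an SDN controller would run per candidate path to check whether a flow's delay-bound requirement can be met before admitting it.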

In fact, network-calculus-based mechanisms such as IntServ and asynchronous traffic shapers have existed for a long time. They have not been widely deployed because they require a scheduler per flow and per-flow state at core nodes, which raises scalability problems, and because the massive traffic of a wide-area network changes dynamically in real time, demanding powerful real-time measurement, global management, and precise control.

In addition, each flow's waiting time is related to its priority. Without preemption, a flow must wait for all higher-priority flows, as well as same-priority flows that arrived earlier, to finish transmitting before it can start. Clearly, the more queue priorities there are, the finer the granularity of flow scheduling. Traditional switch output ports have only 8 priority queues, scheduled by service class. A software-defined queuing method has therefore been proposed that can create up to 65,000 queues per port, aiming at a granularity of one queue per flow. By dynamically creating and deleting queues and modifying queue-scheduling algorithms, scheduling becomes more flexible and latency calculation more precise.
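The non-preemptive strict-priority rule just described can be captured in a toy model. Everything here (the heap-based port model, the flow names) is an illustrative assumption, not a real switch implementation:

```python
# Toy model of an output port with per-flow queues served in strict
# priority: a packet transmits only after all higher-priority backlog
# and all earlier same-priority arrivals have finished.

import heapq

def service_order(packets):
    """packets: list of (priority, arrival_seq, flow_id); a lower
    priority number means higher priority. Returns flow_ids in
    transmission order: sorted by priority, ties broken by arrival."""
    heap = list(packets)
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# Three flows on one port: flowA has priority 0, flowB and flowC share
# priority 1, with flowB arriving before flowA and flowC arriving last.
pkts = [(1, 0, "flowB"), (0, 1, "flowA"), (1, 2, "flowC")]
print(service_order(pkts))  # ['flowA', 'flowB', 'flowC']
```

With only 8 class-based queues, many flows collapse into the same (priority, queue) pair and their mutual ordering is uncontrolled; giving each flow its own queue, as the 65,000-queue proposal does, makes the wait of every flow individually computable.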

3. Cycle-based round-robin queue scheduling

Is there a highly scalable wide-area deterministic scheduling mechanism? Yes: mechanisms such as multi-queue cyclic queuing and forwarding, scalable deterministic forwarding, Cycle Specified Queuing and Forwarding, and Paternoster are all cycle-based round-robin queue scheduling mechanisms. Below we take Cycle Specified Queuing and Forwarding (CSQF), discussed in the IETF DetNet working group [1][2], as an example.

CSQF is a cyclic scheduling mechanism over multiple queues and multiple cycles. It requires frequency synchronization between devices, cyclic queuing and forwarding on device output-port queues, and maintenance of the cycle mapping between adjacent nodes; finally, a segment routing identifier (SID) list carrying cycle information is attached to each packet. As shown in the figure below, when a sender wants to send a time-sensitive flow to a receiver, the connection establishment workflow is: (1) the centralized network controller collects quality-of-service requests; (2) the controller computes a feasible path and cycles that meet the constraints and generates the SIDs; (3) the controller delivers the SID label stacks to the sender and the network devices along the path.

Each SID specifies the egress port and transmission cycle of the packet at one node (hop). For example, 4076 means the packet is transmitted in cycle 6 of port 7 of node 4. A CSQF-capable device therefore forwards a time-sensitive packet at its precisely reserved time by consuming the topmost SID in the packet's label stack. Moreover, segment routing is not only a feasible way to implement explicit routing, but also a source routing technique that requires no per-flow state at intermediate or egress nodes, so it scales well and can schedule large-scale, massive traffic.
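The SID-consuming step can be sketched as follows. The decimal digit layout (1-digit node, 2-digit port, 1-digit cycle) is an assumption chosen only to reproduce the article's 4076 example; it is not the actual SR label encoding:

```python
# Hypothetical SID decoding matching the article's example, where
# 4076 = node 4, port 7, cycle 6. The digit layout is an illustrative
# assumption, not the real CSQF/segment-routing encoding.

def decode_sid(sid):
    """Assumed layout: 1-digit node, 2-digit port, 1-digit cycle."""
    node = sid // 1000
    port = (sid // 10) % 100
    cycle = sid % 10
    return node, port, cycle

def forward(label_stack):
    """Each hop pops the topmost SID and enqueues the packet into the
    specified cycle of the specified egress port (recorded here as a
    list of (node, port, cycle) scheduling decisions)."""
    return [decode_sid(sid) for sid in label_stack]

print(decode_sid(4076))          # (4, 7, 6)
print(forward([4076, 5082]))     # two hops, each with its own cycle
```

Note that all per-flow scheduling state lives in the label stack computed by the controller; the devices along the path only pop a SID and obey it, which is why the mechanism scales.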

This technical route is consistent with the synchronous scheduling mechanism of TSN (Time-Sensitive Networking); both are based on the idea of time-division multiplexing. The hope is that by constraining the maximum queue length, queuing delay can be controlled, and that by finding suitable hop-by-hop transmission delays, the end-to-end worst-case delay can be bounded. In addition, building on TSN, DetNet proposes three layers of deterministic technology: explicit routing, jitter reduction, and packet replication and elimination. The integration of TSN and DetNet is one of the main development directions of wide-area deterministic networks.
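The reason cycle-based schemes bound the worst case can be seen from a simple model: under frequency synchronization, a packet received in cycle i is forwarded in a later, agreed cycle, so per-hop residence never exceeds a small fixed number of cycles. The two-cycles-per-hop figure below is a deliberately loose illustrative bound, and the cycle length is an assumed value:

```python
# Loose worst-case bound for a cycle-based (CQF/CSQF-style) path,
# under the assumed model "received in cycle i, sent by cycle i+1",
# i.e. at most 2 cycle times of residence per hop.

CYCLE_US = 10  # assumed cycle length in microseconds

def worst_case_e2e_delay_us(num_hops, cycle_us=CYCLE_US):
    """Upper bound on end-to-end delay: 2 cycles per hop, regardless
    of how much competing traffic shares the path, as long as each
    cycle's queue is never overfilled."""
    return 2 * cycle_us * num_hops

print(worst_case_e2e_delay_us(5))  # 100 us for a 5-hop path
```

The key property is that the bound depends only on the cycle length and hop count, not on the offered load, which is exactly the determinism the per-class DiffServ-style methods of route two's opening discussion cannot provide.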

4. Conclusion

Finally, as summarized in the figure below, I believe what deterministic networking and similar technologies have in common is resolving the tension between performance guarantees and resource sharing. The strength of Ethernet's best-effort packet forwarding is that it fully realizes resource sharing; the future trend of deterministic networks is to strike a balance between performance guarantees and resource sharing on top of IP/Ethernet, through service isolation and resource reservation, at the lowest possible cost and overhead.

[1] IETF Internet-Draft, draft-chen-detnet-sr-based-bounded-latency. https://datatracker.ietf.org/doc/html/draft-chen-detnet-sr-based-bounded-latency

[2] S. Chen, J. Leguay, S. Martin and P. Medagliani, “Load Balancing for Deterministic Networks,” 2020 IFIP Networking Conference (Networking), 2020, pp. 785-790.

About the author: Huang Yudong is a second-year graduate student at the State Key Laboratory of Network and Switching, Beijing University of Posts and Telecommunications. His research interests include future network architecture and deterministic networks. Email address: [email protected].
