Application of load balancing technology in computing power network

Part 01, ECMP

ECMP is a hop-by-hop, flow-based load balancing strategy. When a router finds multiple optimal paths to the same destination address, it adds multiple entries for that destination to the routing table, each corresponding to a different next hop. These paths can then forward data simultaneously to increase bandwidth. The ECMP algorithm is supported by many routing protocols, such as OSPF, IS-IS, EIGRP, and BGP. ECMP is also mentioned as a load balancing algorithm in the data center architecture VL2 [1].
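To make the flow-based behavior concrete, here is a minimal sketch (not a real router implementation; the function name and addresses are illustrative) of how ECMP typically hashes a packet's five-tuple so that every packet of a flow takes the same path, avoiding reordering while spreading different flows across the equal-cost links:

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops):
    """Pick a next hop by hashing the flow's five-tuple."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    # A stable hash (unlike Python's salted hash()) keeps the mapping
    # consistent: same flow -> same index -> same next hop.
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

hops = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
# Every packet of the same flow maps to the same next hop:
a = ecmp_next_hop("192.168.1.5", "8.8.8.8", "tcp", 40000, 443, hops)
b = ecmp_next_hop("192.168.1.5", "8.8.8.8", "tcp", 40000, 443, hops)
assert a == b
```

Note that this is exactly why ECMP balances poorly when flow sizes differ: the hash spreads flows, not bytes, so one elephant flow can saturate a single link.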

In short, ECMP is a load balancing method that operates at the routing layer. Load balancing at the IP layer has several advantages:

(1) Deployment and configuration are simple. Load balancing can be implemented using the built-in features of many routing protocols, with no additional configuration required.

(2) It provides a variety of traffic scheduling algorithms, which can be based on hashing, weighting, or round-robin.
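The other two scheduling strategies mentioned above can be sketched in a few lines as well (helper names are illustrative, not from any particular router OS):

```python
import itertools
import random

def round_robin(paths):
    """Return a picker that cycles through paths in order."""
    counter = itertools.cycle(range(len(paths)))
    return lambda: paths[next(counter)]

def weighted_choice(paths, weights):
    """Pick a path with probability proportional to its weight."""
    return random.choices(paths, weights=weights, k=1)[0]

pick = round_robin(["A", "B", "C"])
assert [pick() for _ in range(4)] == ["A", "B", "C", "A"]
```

Round-robin gives strict alternation per decision, while weighted selection lets an operator bias traffic toward higher-capacity links.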

This simplicity also brings many flaws:

(1) It may aggravate link congestion. ECMP does not check whether a link is already congested before distributing traffic onto it, so an already congested link can become even more congested.

(2) In many cases, load balancing performance is poor. ECMP cannot tell how busy or idle each path is, so when flow sizes differ widely, the load becomes unbalanced.

Although this Layer 3 load balancing approach is easy to use and deploy, it cannot meet business-level requirements and cannot maintain sessions. The author will therefore introduce several load balancing methods that work at Layer 4 and above.

Part 02, LVS Load Balancing

LVS (Linux Virtual Server) is an open source load balancing project led by Dr. Zhang Wensong, and it has since been integrated into the Linux kernel. The project implements an IP-based request load balancing scheme inside the kernel. When a user on the Internet accesses a company's external load balancing server, the user's web request is sent to the LVS scheduler, which decides, based on a preset algorithm, which backend web server should receive the request. For example, a round-robin algorithm distributes external requests evenly across all backend servers. Although the user's request to the LVS scheduler is forwarded to a real server at the backend, as long as the real servers are connected to the same storage and provide the same service, the user receives the same content no matter which real server handles the request; the entire cluster is transparent to the user. Finally, depending on the LVS working mode, the real server returns the requested data to the user in different ways. LVS working modes are divided into NAT mode, TUN mode, and DR mode [2].

Unlike ECMP, LVS is session-based Layer 4 load balancing: it maintains a session for each flow based on the five-tuple of its upstream and downstream packets. Thanks to its long development history, LVS has many advantages:
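A hypothetical sketch (class and server names are illustrative, not LVS internals) of how a Layer 4 balancer can keep per-flow state: the first packet of a flow picks a real server via round-robin, and the five-tuple-to-server mapping keeps all later packets of that flow on the same server.

```python
import itertools

class FlowTable:
    def __init__(self, real_servers):
        self.real_servers = real_servers
        self._rr = itertools.cycle(real_servers)   # round-robin scheduler
        self._sessions = {}                        # five-tuple -> real server

    def forward(self, five_tuple):
        """First packet of a flow picks a server; later packets reuse it."""
        if five_tuple not in self._sessions:
            self._sessions[five_tuple] = next(self._rr)
        return self._sessions[five_tuple]

lb = FlowTable(["rs1", "rs2"])
flow = ("1.2.3.4", 40000, "5.6.7.8", 80, "tcp")
first = lb.forward(flow)
assert lb.forward(flow) == first  # the session persists across packets
```

This per-flow table is what ECMP lacks: ECMP recomputes a stateless hash per packet, while a session-based balancer remembers its decision.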

(1) Strong load capacity. LVS only distributes traffic at Layer 4 and does not consume excessive CPU or memory resources.

(2) Low configuration requirements. It can be used normally with simple configuration.

(3) Strong robustness. It has been developed for a long time, has many deployment solutions in the industry, and is highly stable.

At the same time, LVS also has many shortcomings:

(1) Limited functionality. The simple configuration also means LVS lacks advanced features, such as failure migration and automatic recovery.

(2) NAT mode has limited performance. This is a problem faced by many Layer 4 load balancers, and the author will discuss it later.

Part 03, NGINX Load Balancing

In addition to being a high-performance HTTP server, NGINX can also act as a reverse proxy web server, which makes it feasible to deploy NGINX as a load balancing server. Indeed, the industry already widely uses NGINX for load balancing, service clusters, primary/backup links, and so on.

NGINX is similar to LVS in that both perform load balancing at Layer 4 or above and can maintain sessions. At the same time, because NGINX works at Layer 7, it depends less on the underlying network than LVS does.
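As a concrete example of Layer 7 scheduling, the following is a simplified sketch of the smooth weighted round-robin selection that NGINX's upstream module is documented to use (it omits failure handling and effective-weight decay):

```python
def smooth_wrr(servers, n):
    """servers: dict of name -> weight; return a list of n selections."""
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    out = []
    for _ in range(n):
        # Each round, every server gains its weight; the leader is picked
        # and pays back the total, which interleaves picks smoothly.
        for name, weight in servers.items():
            current[name] += weight
        best = max(current, key=current.get)
        current[best] -= total
        out.append(best)
    return out

# Weight-5 server "a" gets 5 of every 7 picks, smoothly interleaved:
print(smooth_wrr({"a": 5, "b": 1, "c": 1}, 7))
# ['a', 'a', 'b', 'a', 'c', 'a', 'a']
```

The "smooth" property is the point: a naive weighted scheme would send five requests to "a" back to back, while this spreads them out, avoiding bursts against any single backend.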

Compared with LVS load balancing, NGINX has the following advantages:

(1) Low dependence on the network. As long as the network is reachable, NGINX can perform load balancing, unlike some LVS modes that require a specific network environment.

(2) Simple installation and fast configuration and deployment.

(3) NGINX can detect internal server failures. In simple terms, if a failure occurs while a file is being uploaded, NGINX automatically switches the upload to another backend for processing, which LVS cannot do.
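That failover behavior is possible precisely because a Layer 7 proxy sees the whole request. A hypothetical sketch (function names and the flaky backend are invented for illustration):

```python
def proxy_with_failover(request, backends, send):
    """Try each backend in turn; return the first successful response."""
    last_error = None
    for backend in backends:
        try:
            return send(backend, request)
        except ConnectionError as exc:
            last_error = exc    # this backend failed; try the next one
    raise RuntimeError("all backends failed") from last_error

def flaky_send(backend, request):
    """Simulated transport: backend rs1 is down, others respond."""
    if backend == "rs1":
        raise ConnectionError("rs1 is down")
    return f"{backend} handled {request}"

assert proxy_with_failover("upload", ["rs1", "rs2"], flaky_send) == "rs2 handled upload"
```

A pure Layer 4 balancer cannot do this retry, because it forwards packets without buffering the request and has nothing to replay to another server.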

Likewise, NGINX also has some disadvantages:

(1) There is a lack of dual-machine hot-standby solutions, and in most cases single-machine deployment carries certain risks.

(2) Its highly configurable feature set makes maintenance more costly and difficult than LVS.

Part 04, Thinking and Exploration

Combining the advantages and disadvantages of the common load balancing technologies above, it is not difficult to see that each has its own strengths. In actual use, however, the author found that these methods struggle to deliver high-performance cross-network load balancing, that is, load balancing across metropolitan area networks under FULL-NAT. In short, when experimenting with multi-node cloud deployment, these solutions show certain performance shortfalls.

Based on this, after consulting relevant materials, the author found that Cisco's open source VPP project provides a way to build a high-performance load balancer. Building on DPDK for packet I/O and VPP's high-performance processing, secondary development can achieve high-performance cross-network load balancing, and certain results have already been obtained. The next issue will introduce and discuss this high-performance Layer 4 cross-network load balancing technology.

In the future, the Smart Home Operation Center will conduct more research on high-performance cross-network load balancers, and welcomes more developers and architects to participate in their feature development and scenario exploration.
