Today, let’s have a systematic discussion about load balancing, high availability, and scalability in a TCP service architecture.
How are load balancing, high availability, and scalability implemented for the web-server? In Internet architectures, web-server access generally goes through nginx as a reverse proxy to implement load balancing. The whole architecture is divided into three layers: the client layer, the nginx reverse-proxy layer, and the web-server layer.
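As a rough illustration of that reverse-proxy layer (the addresses, ports, and server names below are made up, not taken from the article), a minimal nginx configuration might look like this; the default upstream strategy is round robin, and ip_hash can be enabled to pin a client IP to one web-server:

```nginx
http {
    # Pool of stateless web-servers behind the reverse proxy.
    # Default balancing strategy is round robin ("polling").
    upstream web_servers {
        # ip_hash;                  # uncomment to pin each client IP to one web-server
        server 192.168.0.11:8080;   # hypothetical web-server addresses
        server 192.168.0.12:8080;
        server 192.168.0.13:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://web_servers;   # forward each request to one server in the pool
        }
    }
}
```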
What does the whole access process look like? The client's request first reaches nginx, which forwards it to one of the web-servers according to its balancing strategy.
Because HTTP connections are short-lived and web applications are stateless, in theory any HTTP request can land on any web-server and be processed normally. (Voice-over: if a request must be handled by one specific machine, the architecture is probably unreasonable and hard to scale horizontally.) The problem is that TCP connections are stateful: once a client and a server have established a connection, every request from that client must land on the same tcp-server. How do we do load balancing in this case, and how do we guarantee horizontal scaling?

Solution 1: a single tcp-server. A single tcp-server obviously guarantees request consistency.
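As a minimal sketch of Solution 1 (the port and the echo-style protocol are assumptions for illustration, not part of the original design), one tcp-server process handles every connection, so all requests on an established connection trivially hit the same process and the same state:

```python
# Solution 1 sketch: a single tcp-server process; every connection and all of its
# per-connection state live in this one process.
import socketserver

class TcpHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Per-connection state lives here for the lifetime of the connection.
        for line in self.rfile:
            self.wfile.write(b"ack: " + line)   # hypothetical echo-style protocol

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), TcpHandler) as server:
        server.serve_forever()
```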
What is the disadvantage of this solution? High availability cannot be guaranteed.

Solution 2: a tcp-server cluster. You can build a tcp-server cluster to ensure high availability and let the client perform load balancing.
How is high availability ensured? If the client finds that a certain tcp-server cannot be reached, it simply picks another one. What is the disadvantage of this solution? Before each connection, the client has to perform an extra DNS lookup.
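A minimal sketch of the client side of Solution 2 (host names and port are hypothetical): the client picks a tcp-server at random for load balancing and falls back to another candidate when a connection fails; with domain names, each attempt also pays for a DNS lookup.

```python
# Solution 2 sketch: client-side load balancing (random pick) plus
# client-side failover (try the next candidate on connection failure).
import random
import socket

TCP_SERVERS = ["tcp1.example.com", "tcp2.example.com", "tcp3.example.com"]  # hypothetical

def connect_any(port: int = 9000, timeout: float = 2.0) -> socket.socket:
    candidates = TCP_SERVERS[:]
    random.shuffle(candidates)                  # random load balancing on the client
    for host in candidates:                     # each domain name still needs a DNS lookup
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue                            # this tcp-server is unreachable, try the next
    raise ConnectionError("no tcp-server reachable")
```

Replacing the domain names in the candidate list with raw IPs is exactly the "IP express" variant described next.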
How are the DNS issues fixed? Configuring the tcp-server IPs directly on the client removes the extra DNS lookup and the risk of DNS hijacking. Many companies do this; it is commonly known as "IP express". What new issues does "IP express" bring? The IP addresses are hardcoded on the client side, and load balancing is implemented on the client side, which results in poor scalability.
Solution 3: implement load balancing on the server. Only by sinking the complex strategies to the server side can the scalability problem be fundamentally solved. A good approach is to add an HTTP interface, get-tcp-ip, and move the client's "IP configuration" and "balancing strategy" to the server.
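A minimal sketch of such an interface, assuming a /get-tcp-ip path, a JSON response, illustrative addresses, and a plain random strategy (none of which are prescribed by the article):

```python
# Solution 3 sketch: the web-server exposes get-tcp-ip, so the IP list and the
# balancing strategy live on the server and can change without a client upgrade.
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

TCP_SERVERS = ["10.0.0.1:9000", "10.0.0.2:9000", "10.0.0.3:9000"]   # hypothetical

class GetTcpIpHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/get-tcp-ip":
            # Server-side balancing strategy; here, plain random selection.
            body = json.dumps({"tcp_server": random.choice(TCP_SERVERS)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), GetTcpIpHandler).serve_forever()
```

The client first calls get-tcp-ip, then opens a TCP connection to whichever address the interface returns.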
In this way, the scalability problem is solved: to expand capacity, you only need to add new tcp-server IPs behind the get-tcp-ip interface, without changing or upgrading the client.
However, a new problem arises. When all the IPs were configured on the client, the client could switch to another IP when one failed, which guaranteed availability. The get-tcp-ip interface, by contrast, only holds a static list of tcp-server cluster IPs and has no idea whether the tcp-servers behind those IPs are actually available. What should we do?

Solution 4: tcp-server status reporting. How can the get-tcp-ip interface know whether each server in the tcp-server cluster is available? Having each tcp-server actively report its status is one option: if a tcp-server goes down, its reports stop, and the get-tcp-ip interface then stops returning that tcp-server's IP to clients. What is the problem with this design? Admittedly, status reporting solves the high availability of the tcp-server, but it introduces a small coupling mistake, a "reverse dependency": it makes the tcp-server depend on a web-server that has nothing to do with its own business.

Solution 5: pulling tcp-server status. A better solution is for the web-server to obtain the status of each tcp-server by "pulling" it, rather than having each tcp-server "push" its own status. This keeps every tcp-server independent and decoupled; each one only needs to focus on its own TCP business functions, while high availability, load balancing, and scalability are handled entirely by get-tcp-ip's web-server. One more thing: implementing load balancing on the server side has an additional benefit, namely load balancing across heterogeneous tcp-servers and overload protection.
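A minimal sketch of the "pull" approach of Solution 5, assuming a plain TCP-connect probe and illustrative addresses and intervals: a background loop on the get-tcp-ip side probes every configured tcp-server and hands out only the ones that respond.

```python
# Solution 5 sketch: the get-tcp-ip side "pulls" tcp-server status with a periodic
# connect probe, so tcp-servers stay decoupled and never report to the web-server.
import socket
import threading
import time

TCP_SERVERS = ["10.0.0.1:9000", "10.0.0.2:9000", "10.0.0.3:9000"]   # hypothetical
live_servers: set[str] = set()

def probe(addr: str, timeout: float = 1.0) -> bool:
    host, port = addr.rsplit(":", 1)
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:
        return False

def health_check_loop(interval: float = 5.0) -> None:
    while True:
        for addr in TCP_SERVERS:
            if probe(addr):
                live_servers.add(addr)
            else:
                live_servers.discard(addr)
        time.sleep(interval)

threading.Thread(target=health_check_loop, daemon=True).start()
# The get-tcp-ip interface now selects only from live_servers instead of the static list.
```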
To summarize:
(1) How does a web-server implement load balancing? With an nginx reverse proxy: round robin, random, or ip-hash.
(2) How does a tcp-server quickly ensure request consistency? With a single machine.
(3) How is high availability ensured? The client configures multiple tcp-server domain names.
(4) How are DNS hijacking prevented and access sped up? "IP express": the client configures multiple tcp-server IPs.
(5) How is scalability ensured? The server provides a get-tcp-ip interface, hides the load balancing strategy from the client, and makes capacity expansion convenient.
(6) How is the high availability of the tcp-server ensured? The tcp-server "pushes" its status to the get-tcp-ip interface, or the get-tcp-ip interface "pulls" the tcp-server status.
Details are important, but ideas matter more than details.
[This article is an original article by 51CTO columnist "58 Shen Jian". Please contact the original author for reprinting.]