TCP access layer load balancing, high availability, and scalability architecture

Today, let's take a systematic look at load balancing, high availability, and scalability for the TCP access layer.


How are load balancing, high availability, and scalability implemented for a web-server?

In typical Internet architectures, web-server access generally uses nginx as a reverse proxy to implement load balancing. The overall architecture is divided into three layers:

  • The upstream calling layer, usually a browser or an app;
  • The intermediate reverse proxy layer, nginx;
  • The downstream real web-server cluster; common web-servers include tomcat and apache;

What does the whole access process look like?

  • The browser initiates a request to daojia.com;
  • DNS resolves daojia.com to the external IP (1.2.3.4);
  • The browser accesses nginx via the external IP (1.2.3.4);
  • nginx applies a load-balancing strategy; common strategies include round-robin, random, ip-hash, etc. (see the configuration sketch after this list);
  • nginx forwards the request to a web-server at an intranet IP (192.168.0.1);
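
As a rough illustration, a minimal nginx configuration for this three-layer setup might look like the sketch below. The extra intranet IPs 192.168.0.2 and 192.168.0.3 and port 8080 are assumptions made up for the example, not part of the original description.

```nginx
# Reverse proxy layer: terminates requests from browsers/apps and
# distributes them across the web-server cluster.
upstream web_servers {
    # Default strategy is round-robin; uncomment the next line for ip-hash.
    # ip_hash;
    server 192.168.0.1:8080;
    server 192.168.0.2:8080;  # assumed additional web-servers
    server 192.168.0.3:8080;
}

server {
    listen 80;
    server_name daojia.com;

    location / {
        proxy_pass http://web_servers;
    }
}
```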

Because HTTP connections are short-lived and web applications are stateless, in theory any HTTP request can land on any web-server and be processed normally.

Voice-over: if a request must be handled by one specific machine, the architecture is probably unreasonable and hard to scale horizontally.

The problem is that TCP is stateful: once a client and a tcp-server establish a connection, every request from that client must land on the same tcp-server. How do we do load balancing in this case, and how do we ensure horizontal scalability?

Solution 1: Single-machine tcp-server

A single tcp-server can obviously guarantee request consistency:

  • The client initiates a TCP request to tcp.daojia.com;
  • DNS resolves tcp.daojia.com to the external IP (1.2.3.4);
  • The client initiates a request to the tcp-server via the external IP (1.2.3.4);

What are the disadvantages of this solution?

High availability cannot be guaranteed.

Solution 2: tcp-server cluster

You can build a tcp-server cluster to ensure high availability and let the client handle load balancing:

  • The client is configured with three tcp-server domain names: tcp1/tcp2/tcp3.daojia.com;
  • The client selects a tcp-server "randomly", say tcp1.daojia.com;
  • DNS resolves tcp1.daojia.com to its external IP;
  • The client connects to the real tcp-server via that external IP;

How to ensure high availability?

If the client finds that a tcp-server cannot be reached, it simply picks another one.
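
For illustration only, here is a minimal Go sketch of this client-side "random pick plus failover" logic; the port 9000 and the 3-second dial timeout are assumptions, not part of the original design.

```go
package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// Candidate tcp-server endpoints configured on the client.
// Port 9000 is an assumption for the example.
var servers = []string{
	"tcp1.daojia.com:9000",
	"tcp2.daojia.com:9000",
	"tcp3.daojia.com:9000",
}

// dialAny picks a tcp-server at random (client-side load balancing) and
// falls back to the remaining ones if the connection attempt fails
// (client-side high availability).
func dialAny() (net.Conn, error) {
	for _, i := range rand.Perm(len(servers)) {
		conn, err := net.DialTimeout("tcp", servers[i], 3*time.Second)
		if err == nil {
			return conn, nil // later requests reuse this long connection
		}
		fmt.Println("connect failed, trying next:", servers[i], err)
	}
	return nil, fmt.Errorf("all tcp-servers unreachable")
}

func main() {
	conn, err := dialAny()
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
```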

What are the disadvantages of this solution?

Each connection requires an extra DNS lookup first:

  • DNS hijacking is hard to prevent;
  • The extra DNS lookup lengthens connection setup time, which is especially noticeable on mobile devices;

How to fix DNS issues?

Configuring the tcp-server IPs directly on the client solves both of the above problems. Many companies do this; it is commonly known as "IP express".

What are the new issues with "IP express"?

The IP addresses are hardcoded on the client, and load balancing is implemented on the client, which results in poor scalability:

  • If an existing IP changes, the client cannot be notified in real time;
  • If a new IP is added, i.e. the tcp-server cluster is expanded, the client cannot be notified in real time;
  • If the load balancing strategy changes, the client needs to be upgraded;

Solution 3: Implement load balancing on the server

Only by sinking complex strategies to the server side can the scalability problem be fundamentally solved.

A good approach is to add an http interface and move the client's "IP configuration" and "balancing strategy" to the server:

  • Before connecting to the tcp-server, the client first calls a newly added get-tcp-ip interface; to the client, this http interface simply returns the IP address of one tcp-server (see the sketch after this list);
  • This http interface implements the balancing strategy that used to live on the client;
  • After obtaining the tcp-server IP, the client initiates a long-lived TCP connection to the tcp-server, as before;
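
A minimal Go sketch of what such a get-tcp-ip interface could look like, assuming a random balancing strategy, a hypothetical /get-tcp-ip path, and made-up cluster addresses:

```go
package main

import (
	"fmt"
	"math/rand"
	"net/http"
)

// External addresses of the tcp-server cluster, kept in server-side
// configuration; the addresses and port are made up for the example.
var tcpServers = []string{
	"1.2.3.4:9000",
	"1.2.3.5:9000",
	"1.2.3.6:9000",
}

// getTCPIP hides the balancing strategy (random here) from the client and
// simply returns the address of one tcp-server.
func getTCPIP(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, tcpServers[rand.Intn(len(tcpServers))])
}

func main() {
	http.HandleFunc("/get-tcp-ip", getTCPIP)
	http.ListenAndServe(":80", nil)
}
```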

In this way, the scalability problem is solved:

  • If an existing IP changes, only the configuration of the get-tcp-ip interface needs to be modified;
  • If a new IP is added, again only the get-tcp-ip configuration changes;
  • If the load balancing strategy changes, the client does not need to be upgraded;

A new problem arises, though. When all the IPs were on the client, the client could switch to another IP if one failed, ensuring availability. The get-tcp-ip interface, however, only maintains a static list of tcp-server cluster IPs and has no idea whether the tcp-servers behind those IPs are actually available. What should we do?

Solution 4: tcp-server status reporting

How does the get-tcp-ip interface know whether each server in the tcp-server cluster is available? One option is active reporting by the tcp-servers: if a tcp-server dies, its reports stop, and the get-tcp-ip interface stops returning that tcp-server's external IP to clients.
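
For illustration, the "push"-style heartbeat could look roughly like the Go sketch below; the reporting endpoint http://web.daojia.internal/report, the 5-second interval, and the address are purely assumptions.

```go
package main

import (
	"net/http"
	"strings"
	"time"
)

func main() {
	self := "1.2.3.4:9000" // this tcp-server's own external address (assumed)
	for {
		// If this tcp-server dies, the reports stop, and the web-server
		// eventually drops the address from get-tcp-ip responses.
		resp, err := http.Post("http://web.daojia.internal/report",
			"text/plain", strings.NewReader(self))
		if err == nil {
			resp.Body.Close()
		}
		time.Sleep(5 * time.Second)
	}
}
```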

What are the problems with this design?

Admittedly, status reporting solves the tcp-server high-availability problem, but the design introduces a small "reverse dependency" coupling mistake: it makes the tcp-server depend on a web-server that has nothing to do with its own business.

Solution 5: Pulling tcp-server status

A better solution is for the web-server to "pull" the status of each tcp-server, rather than having each tcp-server "push" its own status.

In this way, each tcp-server is independent and decoupled, and only needs to focus on its own TCP business logic.

Concerns such as high availability, load balancing, and scalability are handled exclusively by the get-tcp-ip web-server.
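
A minimal Go sketch of the "pull" approach, assuming the web-server simply probes each tcp-server with a TCP dial; the probe interval, timeout, and addresses are made up for the example:

```go
package main

import (
	"net"
	"sync"
	"time"
)

var (
	tcpServers = []string{"1.2.3.4:9000", "1.2.3.5:9000", "1.2.3.6:9000"}
	mu         sync.RWMutex
	alive      = map[string]bool{} // addresses currently considered healthy
)

// probe dials every tcp-server periodically and records which ones are up;
// the get-tcp-ip interface would only return addresses marked alive here.
func probe() {
	for {
		for _, addr := range tcpServers {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			mu.Lock()
			alive[addr] = err == nil
			mu.Unlock()
			if err == nil {
				conn.Close()
			}
		}
		time.Sleep(5 * time.Second)
	}
}

func main() {
	go probe()
	select {} // in a real web-server, /get-tcp-ip would be served here
}
```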

One more thing: implementing load balancing on the server side has another benefit, namely load balancing across heterogeneous tcp-servers and overload protection:

  • Static implementation: the web-server can configure a load weight for each tcp-server IP and distribute load according to each tcp-server's machine configuration (nginx has a similar feature);
  • Dynamic implementation: the web-server can distribute load dynamically according to the "pulled" tcp-server status and apply overload protection when a tcp-server's performance drops sharply (a sketch follows this list);
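
As a sketch of how the web-server might combine static weights with the pulled status, assuming a hypothetical 90% load threshold for overload protection and made-up weights and load values:

```go
package main

import (
	"fmt"
	"math/rand"
)

// serverState is what the get-tcp-ip web-server might track per tcp-server:
// a static weight (machine capacity) plus a dynamically pulled load metric.
type serverState struct {
	addr   string
	weight int     // static weight, e.g. proportional to machine spec
	load   float64 // pulled dynamically, 0.0 (idle) .. 1.0 (saturated)
}

// pick chooses a tcp-server by weighted random selection, skipping servers
// above an assumed overload threshold (overload protection).
func pick(servers []serverState) (string, error) {
	const overload = 0.9 // assumed threshold for the example
	candidates := []serverState{}
	total := 0
	for _, s := range servers {
		if s.load < overload {
			candidates = append(candidates, s)
			total += s.weight
		}
	}
	if total == 0 {
		return "", fmt.Errorf("all tcp-servers overloaded or down")
	}
	r := rand.Intn(total)
	for _, s := range candidates {
		r -= s.weight
		if r < 0 {
			return s.addr, nil
		}
	}
	return candidates[len(candidates)-1].addr, nil
}

func main() {
	servers := []serverState{
		{"1.2.3.4:9000", 4, 0.3},  // bigger machine, higher weight
		{"1.2.3.5:9000", 2, 0.95}, // overloaded, will be skipped
		{"1.2.3.6:9000", 1, 0.1},
	}
	fmt.Println(pick(servers))
}
```

Servers above the threshold are simply excluded from selection, which is one simple way to realize the overload protection mentioned above.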

Summary

(1) How does a web-server implement load balancing?

Use an nginx reverse proxy with round-robin, random, or ip-hash strategies.

(2) How does a tcp-server quickly ensure request consistency?

Single machine.

(3) How to ensure high availability?

The client configures multiple tcp-server domain names.

(4) How to prevent DNS hijacking and speed up connection establishment?

"IP express": the client configures multiple tcp-server IPs directly.

(5) How to ensure scalability?

The server provides a get-tcp-ip interface, hiding the load-balancing strategy from the client and making capacity expansion easy.

(6) How to ensure high availability of the tcp-server cluster?

Either the tcp-servers "push" their status to the get-tcp-ip interface, or the get-tcp-ip interface "pulls" the status from the tcp-servers; pulling is preferred, as it avoids the reverse dependency.

Details are important, but ideas are more important than details.

[This article is an original article by 51CTO columnist "58 Shen Jian". Please contact the original author for reprinting.]
