API Gateway

API gateways emerged together with microservice architecture. Different microservices generally have different network addresses, and an external client may need to call several service interfaces to complete a single business requirement. If clients were to communicate directly with each microservice, several problems would follow: the client has to know and track many service addresses, one business operation can require multiple round trips, internal protocols are not always web-friendly, and cross-cutting concerns such as authentication must be re-implemented in every service.
These problems can be solved with an API gateway, a middle layer between clients and servers through which all external requests pass first. In other words, the API implementations can concentrate on business logic, while security, performance, and monitoring are handled by the gateway, which improves business flexibility without sacrificing security. The typical architecture is shown in the figure. The main advantages of an API gateway are a single entry point for clients, centralized handling of cross-cutting concerns, and the ability to aggregate results and translate protocols on behalf of clients.
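The "middle layer" idea can be sketched in a few lines of Python (a toy illustration: the routes, service addresses, and token check are invented for the example, not taken from any real gateway product):

```python
# Toy sketch of an API gateway: one entry point that applies a
# cross-cutting check, then routes each request to a backend service.
# All routes and addresses below are made up for illustration.

ROUTES = {
    "/orders": "http://order-service:8081",
    "/users":  "http://user-service:8082",
}

def gateway(path, token=None):
    # Authentication is handled once here instead of in every microservice.
    if token is None:
        return 401, "unauthorized"
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            # A real gateway would proxy the HTTP request to `backend`.
            return 200, "forwarded %s to %s" % (path, backend)
    return 404, "no route"
```

A client then needs to know only the gateway's address; adding or moving a microservice changes the routing table, not the client.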
NGINX Service

Nginx consists of a small core and a set of modules. The core is deliberately small and simple: it only matches the URL of a client request against the configuration file and then starts the appropriate modules to do the actual work. The following diagram reflects the general processing flow of an HTTP request.

Nginx modules are compiled directly into the Nginx binary, i.e. they are statically compiled and loaded automatically when Nginx starts. This differs from Apache, where a module is first compiled into an .so file and the configuration file then specifies whether to load it. When the configuration is parsed, several modules may be candidates for a request, but each request is ultimately handled by exactly one module.

After Nginx starts, there is one Master process and multiple Worker processes, which interact through inter-process communication, as shown in the figure. A Worker process blocks in an I/O multiplexing call such as select() or epoll_wait(), waiting for read/write events to occur. Nginx handles requests asynchronously and without blocking, so it can service thousands of requests at the same time; the number of requests a Worker process can handle concurrently is limited mainly by memory. Architecturally, there is almost no synchronization-lock contention between Worker processes handling concurrent requests, and Worker processes normally do not sleep. Therefore, when the number of Worker processes equals the number of CPU cores (with each Worker process bound to a specific core), the cost of switching between processes is minimal.

Zuul

Zuul is Netflix's open-source microservice gateway component, which can be used together with Eureka, Ribbon, Hystrix, and other components.
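The worker-per-core tuning described above comes down to a few configuration directives (a minimal fragment; the four-worker affinity masks assume a 4-core host):

```
# Match Worker processes to CPU cores and pin each Worker to one core.
worker_processes  4;
worker_cpu_affinity 0001 0010 0100 1000;  # one CPU mask per Worker

events {
    use epoll;                 # the I/O multiplexing call Workers block on
    worker_connections 10240;  # per-Worker concurrent connection limit
}
```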
The core of Zuul is a series of filters, with which it can implement functions such as authentication and security, insights and monitoring, dynamic routing, stress testing, load shedding, and static response handling.
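The filter pipeline can be sketched in a simplified, library-free form (the phase names mirror Zuul's pre/route/post convention, but the classes and ordering rules here are invented for illustration and are not Zuul's actual API):

```python
# Simplified sketch of a Zuul-style filter chain: "pre" filters run before
# routing, a "route" filter forwards the request, and "post" filters shape
# the response. All filters share one request context dictionary.

class Filter:
    def __init__(self, ftype, order, fn):
        self.ftype, self.order, self.fn = ftype, order, fn

def run_chain(filters, ctx):
    # Run each phase in turn; within a phase, run filters in `order`.
    for phase in ("pre", "route", "post"):
        for f in sorted((f for f in filters if f.ftype == phase),
                        key=lambda f: f.order):
            f.fn(ctx)
    return ctx

filters = [
    Filter("pre",   1, lambda c: c.setdefault("trace", []).append("auth")),
    Filter("route", 1, lambda c: c.update(response="from backend")),
    Filter("post",  1, lambda c: c["trace"].append("metrics")),
]
```

Because the chain is data-driven, new behavior (rate limiting, canary routing, and so on) is added by registering another filter rather than by changing the gateway core.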
Most of the features above are not available in Nginx. Netflix created Zuul to solve many problems in the cloud (in particular, to implement these features across AWS regions), rather than just to build a reverse proxy like Nginx; of course, you can use only the reverse-proxy function, which will not be described here.

Zuul 1 is built on the Servlet framework. As shown in the figure, it is blocking and multi-threaded: one thread handles one connection request. When internal delays are severe and many devices fail, this model causes the number of live connections and threads to rise.

The big difference in Zuul 2 is that it runs on an asynchronous, non-blocking framework, with one thread per CPU core handling all requests and responses. The life cycle of requests and responses is managed through events and callbacks, which reduces the number of threads and therefore the overhead. Since the data stays on the same CPU, CPU-level caches can be reused, and the delay and retry-storm problems mentioned above are greatly alleviated by keeping connections and events in queues (much lighter than thread switching, and so naturally cheaper). This change should improve performance considerably, and we will see the results in the test phase below.

This article is about API gateway performance, which also touches high availability, so let us briefly introduce Zuul's high-availability options. High availability is critical because all external traffic to the backend microservices passes through Zuul; in production, Zuul should therefore be deployed redundantly to avoid a single point of failure. There are generally two deployment solutions.

1. The Zuul client registers with Eureka Server. This is the relatively simple case.
You only need to register multiple Zuul nodes with the Eureka Server to make Zuul highly available; in fact, this is no different from the high-availability setup of any other service. As the figure shows, when the Zuul client is registered with the Eureka Server, deploying multiple Zuul nodes is enough: the client automatically queries the list of Zuul servers from Eureka and then uses a load-balancing component such as Ribbon to call the Zuul cluster.

2. The Zuul client cannot register with Eureka Server. If the client is, say, a mobile app, it cannot register with the Eureka Server as in Solution 1. In this case, Zuul can be made highly available with an additional load balancer such as Nginx, HAProxy, or F5. As shown in the figure, the client sends its request to the load balancer, which forwards it to one of the Zuul nodes behind it.

Spring Cloud

Although Spring Cloud has "Cloud" in its name, it is not a cloud-computing solution. It is a toolset built on Spring Boot for quickly implementing common patterns of distributed systems. Applications developed with Spring Cloud are well suited to deployment on Docker or a PaaS, which is why they are also called cloud-native applications; cloud-native can be loosely understood as a software architecture designed for cloud environments. Being a toolset, Spring Cloud contains many tools, as the following picture shows. Since this article only compares API gateways, the other tools are not introduced one by one. Spring Cloud integrates Zuul and, from Zuul's perspective, makes no major changes; however, the Spring Cloud framework as a whole wires in additional components and provides far more functionality than standalone Netflix Zuul, so differences may appear in the comparison.

Service Mesh - Linkerd

I think Dr. Turgay Çelik chose Linkerd as one of the comparison targets because Linkerd provides an elastic service mesh for cloud-native applications: a service mesh offers a lightweight, high-performance network proxy along with microservice-framework support. By its own introduction, Linkerd is an open-source RPC proxy for microservices. It is built directly on Finagle, Twitter's internal core library for managing communication between services (practically every online service at Twitter is built on Finagle, which supports hundreds of millions of RPC calls per second), and its design goal is to simplify operations under a microservice architecture: a dedicated infrastructure layer for service-to-service communication. Like Spring Cloud, Linkerd provides load balancing, circuit breaking, service discovery, dynamic request routing, retries and deadlines, TLS, HTTP gateway integration, transparent proxying, gRPC, distributed tracing, and many operational features. It is quite comprehensive and adds another option to the technology selection for microservice frameworks. Since I have not worked with Linkerd, I cannot analyze it at the architectural level yet; I will add more on this later and make a technology selection of my own.

Performance Test Results

Dr. Turgay Çelik's article uses ab, Apache's HTTP server benchmarking tool, as the test tool. Note that because his tests ran on the Amazon (AWS) public cloud, the results may differ from what you would see on physical machines. In the experiment, two machines were started, a client and a server, with the services under test installed on each; the client accessed the services in several ways and tried to fetch resources. The test plan is shown in the figure below. Three environments were selected for the test: single CPU core, dual core, and eight cores.
During the test, 200 parallel threads were used to send a total of 10,000 requests. The command template is as follows:
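A typical invocation matching the parameters above (10,000 total requests at a concurrency of 200) looks like this; the target URL is a placeholder:

```
ab -n 10000 -c 200 http://<target-host>/<resource>
```

`-n` is ab's total-request count and `-c` its concurrency level.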
Note: Since Dr. Turgay Çelik's test was based on Zuul 1, the results are poor and do not reflect the performance of current Zuul versions.

From the results above: in the single-core environment, Zuul performed worst (950.57 requests/s), direct access performed best (6519.68 requests/s), and the Nginx reverse proxy lost about 25% of throughput relative to direct access (4888.24 requests/s). In the dual-core environment, Nginx performed nearly three times better than Zuul (6187.14 vs. 2099.93 requests/s). In the stronger eight-core environment, direct access, Nginx, and Zuul were close to each other, but Spring Cloud Zuul, perhaps because of its overall internal overhead, reached only 873.14 requests per second.

Final Conclusion

From a product perspective, an API gateway is responsible for request routing, composition, and protocol conversion. All client requests pass through the API gateway first, which then routes them to the appropriate microservices. The gateway often handles a request by calling several microservices and merging the results, and it can translate between web protocols (such as HTTP and WebSocket) and the web-unfriendly protocols used internally, so it remains very useful, and the choice of technology here matters for the whole system.

From my understanding of the design principles of these four components: Zuul 1's design pattern is similar to Nginx's, in that each I/O operation is executed by a worker thread while the request thread blocks until the worker finishes; the differences are that Nginx is implemented in C, Zuul in Java, and the JVM itself starts slowly. Zuul 2 should improve greatly over Zuul 1. In addition, Zuul's first test run performed poorly but later runs were much better, probably thanks to JIT (just-in-time) compilation warming up.
As for Linkerd, its gateway design is resource-sensitive, so comparing it with the other gateway implementations in a generic environment may yield inaccurate results.