Before Envoy emerged, Nginx was the most representative open source system in the field of network communication middleware. Powerful, high-performance, highly extensible, and backed by a strong ecosystem, it became the de facto benchmark for HTTP traffic management. Although the up-and-coming Envoy differs from Nginx in positioning and goals in many respects, both have notable strengths in architectural design. The following compares Envoy and Nginx in detail along several dimensions: functional positioning, overall network model, connection processing, request parsing, and plug-in mechanism. A comprehensive comparison with Nginx at the functional and architectural levels should also give a more rounded picture of Envoy's architectural design.

1. Function and positioning

Nginx's core functions are those of a web server and a reverse proxy. As a web server, it handles basic server duties such as parsing the HTTP request protocol, responding in HTTP format, caching, and logging; as a reverse proxy, it performs common proxy functions such as request forwarding, load balancing, authentication, rate limiting, caching, and logging. Beyond the HTTP protocol, Nginx also supports plain TCP and UDP reverse proxying, and through its stream mechanism it can proxy common layer-4 protocols such as MySQL and Memcached.

Envoy's goal is considerably more ambitious: to transparently take over the communication traffic between microservices, decouple communication and service-governance functions from the microservices themselves, and make it easy to add support for custom protocols. In short, Nginx's keywords are "web server" and "reverse proxy", while Envoy "transparently takes over traffic", which better reflects the control and management of traffic.
In addition, from the usage perspective, microservices call Nginx explicitly, relying on it for load balancing and related functions, whereas Envoy is invoked implicitly: business microservices do not need to be aware of Envoy's existence and communicate just as they did before, except that they no longer need to concern themselves with the details of communication and link management.

2. Network model

In terms of network model, Nginx adopts a classic multi-process architecture consisting of a master process and worker processes. The master process manages the workers: it monitors their running status, sends them control signals in response to external management commands, and starts a new worker when one exits. The worker processes handle the actual network events. Workers are independent of one another and compete for new connections and requests from clients. To keep request handling efficient, the entire lifecycle of a request is processed within the same worker process. It is recommended to configure the number of worker processes to match the number of CPU cores on the machine.

Nginx has used this classic multi-process architecture since its birth. Under this architecture, if request processing runs into a particularly time-consuming operation, such as disk access or a synchronous call to a third-party service, the process handling the request blocks. Not only do CPU resources go underutilized, but if the blocking lasts long enough, it affects more than the current request: in severe cases, large numbers of requests queued on that process time out. To solve this problem, Nginx introduced the concept of a thread pool in version 1.7.11.
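The thread pool just mentioned is enabled through configuration. A minimal nginx.conf sketch (the `thread_pool` and `aio threads` directives are real Nginx directives available since 1.7.11; the pool name, sizes, and location are illustrative):

```nginx
# Define a thread pool in the main context (Nginx >= 1.7.11).
thread_pool default threads=32 max_queue=65536;

http {
    server {
        location /downloads/ {
            # Offload blocking disk reads to the thread pool instead of
            # stalling the worker process's event loop.
            aio threads=default;
            sendfile off;
        }
    }
}
```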
If you encounter logic that takes a particularly long time, you can add a thread pool configuration and hand that work to the thread pool. The thread pool mechanism is a valuable supplement to the Nginx architecture: by specifically addressing a class of long-blocking scenarios, it lifted Nginx's performance to a new level.

Unlike Nginx, Envoy uses a multi-threaded network architecture. Envoy generally creates as many worker threads as there are CPU cores. All worker threads listen simultaneously on the listeners Envoy is configured with, accept new connections, instantiate the corresponding filter chain for each new connection, and handle all requests on that connection. As in Nginx, each request in Envoy is processed entirely on a single thread.

From the above analysis, the network processing models of Envoy and Nginx are broadly similar. Both are fully asynchronous programming models: all operations are performed asynchronously, and each execution context uses its own event dispatcher to schedule and trigger that context's asynchronous events. The only difference is the execution context that carries the network: Nginx uses multiple processes, while Envoy uses multiple threads. Nginx mitigates the blocking problem inherent in asynchronous programming by means of thread pools, but it still does not solve the problem fundamentally: if a blocking scenario goes unnoticed at design or coding time, requests queued behind the blocked request will still time out because they cannot be processed. Since Envoy uses the same fully asynchronous model, it faces the same problem, but Envoy has begun trying to solve it.
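The threading model described above can be illustrated with a toy sketch (names and the hash-based dispatch are our simplification, not Envoy's actual code): each worker thread owns an event queue standing in for its dispatcher, and every request belonging to one connection is handled on the same worker thread.

```python
import queue
import threading

class Worker(threading.Thread):
    """Toy model of an Envoy worker: one thread with its own event queue
    standing in for the per-thread event dispatcher."""
    def __init__(self, name):
        super().__init__(name=name, daemon=True)
        self.events = queue.Queue()
        self.handled = []          # (connection_id, request) pairs seen

    def run(self):
        while True:
            item = self.events.get()
            if item is None:       # shutdown sentinel
                break
            self.handled.append(item)

def dispatch(workers, connection_id, request):
    # All requests of a given connection go to the same worker thread;
    # here the worker is chosen by hashing the connection id.
    worker = workers[hash(connection_id) % len(workers)]
    worker.events.put((connection_id, request))

workers = [Worker(f"worker-{i}") for i in range(4)]
for w in workers:
    w.start()

for i in range(3):
    dispatch(workers, "conn-A", f"req-{i}")

for w in workers:
    w.events.put(None)             # stop all workers
for w in workers:
    w.join()

# Exactly one worker handled all of conn-A's requests.
busy = [w for w in workers if w.handled]
assert len(busy) == 1 and len(busy[0].handled) == 3
```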
The specific approach: give each worker thread a watchdog, and have the thread periodically refresh its watchdog's last-updated time via a timer. The main thread monitors whether each worker's watchdog has been refreshed recently; if a watchdog has not been updated for longer than a threshold, one can conclude that the thread never got a chance to run the refresh, and infer that it is currently blocked and unable to process requests. Through this mechanism, Envoy can detect that a worker thread has been blocked for a long time. Follow-up handling can then be layered on top (such as migrating the pending requests to other threads and killing the blocked thread), which solves worker-thread blocking at the mechanism level.

3. Connection processing

Nginx limits the maximum number of connections each worker may hold through the worker_connections parameter. As the network model above shows, when a client connection arrives, all idle processes compete for it. If that competition lets one process grab a large share of the new connections, its pool of free connection slots is quickly exhausted. Without control, that process will eventually find no free slot when trying to take a new connection, while other processes fail to win new connections even though they still have free slots.

Would it work to simply distribute connections evenly across the processes? That approach is actually problematic: the request QPS carried on different connections can vary enormously, so two processes could hold the same number of connections while one is very busy and the other nearly idle. Therefore, to let every worker process contribute its computing capacity to the fullest, connections must be managed in a more refined way.
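The watchdog scheme described above can be sketched as follows (class and method names are ours for illustration, not Envoy's API): the worker "pets" its watchdog from a timer, and the monitoring thread flags any watchdog that has gone unrefreshed past a timeout.

```python
import threading
import time

class Watchdog:
    """Sketch of the watchdog idea described above: a worker thread
    periodically 'pets' its watchdog; the main/monitoring thread treats
    a watchdog that has not been petted within `timeout` seconds as
    evidence that the worker is blocked."""

    def __init__(self):
        self._lock = threading.Lock()
        self._last_pet = time.monotonic()

    def pet(self):
        # Called from the worker thread's event loop via a timer.
        with self._lock:
            self._last_pet = time.monotonic()

    def stalled(self, timeout):
        # Called from the monitoring thread.
        with self._lock:
            return time.monotonic() - self._last_pet > timeout

wd = Watchdog()
wd.pet()
assert not wd.stalled(timeout=60.0)  # freshly petted: not stalled
time.sleep(0.05)                     # worker "blocked", no pets arrive
assert wd.stalled(timeout=0.01)      # monitor would now flag this thread
```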
The approach Nginx takes is to let each worker process dynamically adjust when it takes new connections according to its own load. Concretely: once a worker's current connection count reaches 7/8 of worker_connections, it stops trying to take the accept lock and stops handling new connections. This gives the other workers more opportunity to grab the listening handle and establish new connections. Moreover, because of the lock's timeout setting, workers that failed to obtain the lock retry it more frequently. In this way, Nginx solves the load-balancing problem among worker processes.

Envoy runs into a similar load-imbalance problem. Envoy is still developing rapidly and has many problems to solve; the Envoy community currently considers this issue relatively low priority, and it will be discussed and addressed later as circumstances require.

4. Plug-in mechanism

Nginx has powerful plug-in extension capabilities; on top of its extension mechanism, business-specific differentiation and customization are easy to implement. Nginx plug-ins take the form of modules. Concretely, Nginx provides the following forms of extension: 1) extending protocols through the stream mechanism, for example adding Memcached protocol proxying and load balancing; 2) processing HTTP requests in handler mode; 3) filtering HTTP request and response messages, for example modifying or customizing message content; 4) providing a custom load-balancing mechanism when accessing an upstream.

For HTTP, its most mature protocol, Nginx divides the entire request processing flow into multiple phases; there are currently 11 of them, including reading the request content and rewriting the request address.
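The 7/8 heuristic just described amounts to a one-line predicate (the function name is ours; Nginx implements the equivalent check internally around its accept mutex):

```python
def worker_should_accept(current_connections, worker_connections):
    """Nginx's heuristic as described above: a worker stops competing
    for the accept lock once it already holds 7/8 of its configured
    worker_connections, leaving new connections to less-loaded workers."""
    return current_connections < worker_connections * 7 // 8

# A lightly loaded worker keeps accepting; a nearly full one backs off.
assert worker_should_accept(100, 1024)        # 100 < 896
assert not worker_should_accept(900, 1024)    # 900 >= 896
```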
When the business needs to extend or customize a particular phase, it only needs to register a callback function for that phase; when the Nginx core's HTTP processing reaches that phase, it invokes the previously registered callback. Nginx's module support has historically been inflexible: modules had to be compiled together with Nginx's own source code, only the modules chosen at compile time were available, and modules could not be selected or loaded dynamically at runtime, which drew many complaints. To address this, Nginx introduced support for dynamic module loading in version 1.9.11; since then, adding a third-party module extension no longer requires replacing the Nginx binary. Nginx also supports Lua extensions: with the simple, approachable Lua language and its powerful coroutine mechanism, many extensions can be implemented easily, with performance that generally meets requirements.

Envoy likewise provides a powerful plug-in extension mechanism; the areas where it is most used today are monitoring filter plug-ins and network-processing filter plug-ins. Compared with Nginx, Envoy's network plug-ins are positioned at the protocol level. Taking HTTP as an example, Envoy has no extension mechanism as fine-grained as Nginx's phases; if you want to extend Envoy's HTTP protocol processing, there are not many extension points at present. Envoy's plug-ins currently use static registration, with plug-in code compiled together with Envoy's code. Unlike Nginx, however, Envoy has supported dynamic loading from the very beginning: its distinctive xDS API design lets you customize and modify Envoy's xDS configuration at any time, and Istio pushes the modified xDS configuration to Envoy over gRPC, where it is loaded and takes effect dynamically.
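The dynamic module loading introduced in 1.9.11 is driven by the `load_module` directive (a real Nginx directive; the module name below is a placeholder, not a real module):

```nginx
# Build a third-party module as a dynamic module (Nginx >= 1.9.11):
#   ./configure --add-dynamic-module=/path/to/module-src && make modules
#
# Then load it near the top of nginx.conf without replacing the binary.
load_module modules/ngx_http_example_module.so;
```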
In addition, the Envoy and Cilium communities are currently exploring the use of the user-space network customization capabilities provided by eBPF for finer-grained management and extension of Envoy traffic. Cilium introduced a Go extension for Envoy in version 1.3 and uses it to register filter plug-ins with Envoy. The core of the implementation is the OnData() function: when Envoy receives traffic, it calls the plug-in's OnData() function for processing. Envoy has also done some exploratory work on Lua support, and currently allows Lua scripts to filter and adjust HTTP requests. The Lua HTTP filter is still experimental and is not recommended for direct use in production until it has been verified as mature; once mature, the Lua scripting mechanism can be used to enhance Envoy's extensibility in many more scenarios.
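A minimal sketch of the experimental Lua HTTP filter mentioned above, assuming Envoy's v3 configuration API (only the `http_filters` fragment of a full listener configuration is shown; the header name is arbitrary):

```yaml
http_filters:
- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    inline_code: |
      function envoy_on_request(request_handle)
        -- inspect or adjust the request; here, tag it with a demo header
        request_handle:headers():add("x-lua-seen", "request")
      end
      function envoy_on_response(response_handle)
        response_handle:headers():add("x-lua-seen", "response")
      end
- name: envoy.filters.http.router
```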