Trip.com QUIC high availability and performance improvements

This article first introduces the QUIC multi-process deployment architecture, then analyzes the problems the QUIC network architecture encountered in production and how we solved them. On the performance side, it shares the design and gains of QUIC full-link tracking and the ideas behind developing and tuning QUIC congestion control algorithms. We hope this content helps readers understand the QUIC protocol and how it can be optimized in practice.

1. Introduction

1.1 Introduction to QUIC in Trip.com APP

QUIC (Quick UDP Internet Connections) is a UDP-based transport layer protocol originally proposed by Google and is the standard transport protocol underlying HTTP/3. Compared with TCP, QUIC has the following advantages:

1) Multiplexing: QUIC allows multiple data streams to be transmitted in parallel on a single connection, solving the TCP head-of-line blocking problem and thus improving transmission efficiency;

2) Fast connection establishment: When establishing a new connection, the QUIC handshake and the TLS handshake run in parallel, avoiding the extra 1-RTT cost of a separate TLS handshake. When a previous connection has expired and a new one is established within the validity period of the PSK (pre-shared key), QUIC verifies the PSK and resumes the TLS session to achieve a 0-RTT connection, so the connection is established even faster. This is particularly beneficial for short connections and low-frequency requests from a single user;

3) Connection migration: TCP identifies a connection by its 4-tuple, and the connection becomes invalid when any part of the 4-tuple changes. QUIC uniquely identifies a connection by a connection ID (CID). When the user's network switches (for example, from Wi-Fi to 4G), QUIC can still find the connection through the CID, complete validation of the new path, and keep the connection alive, avoiding a costly reconnection.

4) Congestion control: TCP's congestion control requires the support of the operating system, with high deployment costs and long upgrade cycles. QUIC implements the congestion control algorithm at the application layer, making upgrades more flexible.

These features drove the standardization of QUIC in the IETF, which published RFC 9000, the standardized version of QUIC, in May 2021. In 2022, we completed the QUIC multi-process deployment solution in the Trip.com App, supporting connection migration and 0-RTT across multiple processes, and ultimately reduced the Trip.com App's link time by 20%, greatly improving the experience of overseas users. Our initial network architecture was as follows:

There are two important components, QUIC SLB and QUIC Server:

  • QUIC SLB works at the transport layer and provides load balancing. It receives UDP datagrams and forwards them to the correct server. When a user's network switch changes the connection 4-tuple, SLB extracts the server's IP and port from the connection ID (CID) and still forwards the packet to the right instance, which is what makes connection migration possible (a hypothetical CID layout is sketched after this list);
  • QUIC Server works at the application layer and is the main implementation of the QUIC protocol. It terminates QUIC, forwards client requests and returns responses, and implements 0-RTT by sharing the session ticket_key across the server cluster.
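
To illustrate the routing idea (this is not the actual CID layout used in production), the sketch below assumes the server's IPv4 address and port are embedded at fixed offsets in the CID, so the SLB can route a migrated connection without keeping any per-connection state:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical CID layout: 1-byte prefix | 4-byte server IPv4 | 2-byte port | random tail.
 * The real layout is an internal convention; this only illustrates the idea. */
typedef struct {
    uint32_t server_ip;   /* network byte order */
    uint16_t server_port; /* network byte order */
} cid_route_t;

static int cid_extract_route(const uint8_t *cid, size_t cid_len, cid_route_t *out)
{
    if (cid_len < 7) {
        return -1;                        /* too short to carry routing info */
    }
    memcpy(&out->server_ip, cid + 1, 4);  /* skip the 1-byte prefix */
    memcpy(&out->server_port, cid + 5, 2);
    return 0;
}
```

With such a layout, the SLB forwards every datagram of a connection to the same QUIC Server instance purely from the destination CID, even after the client's 4-tuple changes.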

1.2 QUIC high availability and performance improvement

With the recovery of the global tourism industry, Ctrip's business at home and abroad has grown rapidly, and traffic keeps increasing. As the main network channel of the Trip.com App, QUIC is critically important. To better cope with the growing traffic and deliver every user request more reliably, we set the goal of QUIC high availability and performance improvement and completed the following optimizations:

QUIC cluster and link high availability optimization:

  • Completed the containerization of QUIC Server, added HPA capability, and customized HPA metrics suited to QUIC scenarios
  • Optimized the QUIC network architecture and enabled active health checks and dynamic scaling in and out for QUIC Server
  • Improved client-side disaster recovery through a combined push-and-pull strategy, enabling network channel and entry IP switching within seconds
  • Built a stable and reliable monitoring and alerting system

QUIC success rate and link performance improvement:

  • Supported QUIC full-link tracking, making QUIC runtime data more transparent
  • Further improved link performance by optimizing the congestion control algorithm
  • Shortened link time for European users by 20% through multi-region deployment
  • Upgraded the client-side Cronet, significantly improving network request speed

QUIC application scenario expansion:

  • Enabled the Ctrip Travel App and the Business Travel Overseas App to access QUIC, greatly improving the network success rate and performance for domestic users and business travelers in overseas scenarios

These optimizations are described in detail below.

2. QUIC network architecture upgrade

2.1 Containerization

Before the transformation, both QUIC SLB and QUIC Server were deployed on virtual machines. The deployment process was as follows:

In the early days of our QUIC practice we often needed to perform custom operations on the machines, and this deployment approach was flexible. However, as the QUIC server functionality stabilized and business traffic grew, its drawbacks, namely long deployment times and no support for dynamic scaling, became increasingly apparent. To solve these problems, we containerized QUIC SLB and QUIC Server:

  • QUIC Server carries two core functions: QUIC protocol processing and forwarding of user HTTP requests. We packaged it as a container image and connected it to the internal Captain release system, which supports grayscale releases, version rollbacks and other features that reduce release risk. It also gained HPA capability, shortening scaling time from minutes to seconds.

  • As the public network entry point, QUIC SLB mainly forwards user UDP packets; its load is small and it is not sensitive to traffic changes. Because Akamai acceleration is required, QUIC SLB must support both UDP and TCP, and our container platform cannot yet expose a dual-protocol public entry point. We therefore packaged QUIC SLB as a virtual machine image that supports one-click scaling on our PaaS, greatly reducing deployment cost.

2.2 Service Discovery and Active Health Monitoring

In QUIC SLB, we use Nginx as a layer 4 proxy to implement QUIC UDP packet forwarding and connection migration capabilities.

In the early stage after containerization, we used Consul as the service registry for QUIC Server. In the Kubernetes lifecycle hooks postStart and preStop, the server calls Consul's registration and deregistration APIs to add or remove its own IP. QUIC SLB watches for IP changes in Consul, perceives the state of each QUIC Server in time, and updates the Nginx configuration file in real time, thus realizing automatic registration and discovery of QUIC Server.

However, in practice we found that when a fault is injected directly into a QUIC Server, the preStop hook is not triggered because the pod hosting the server is not destroyed. The faulty server cannot remove its own IP from Consul, so QUIC SLB cannot perceive that the server is offline, and the faulty IP remains in the SLB's nginx.conf. This has different consequences depending on whether Nginx is acting as a TCP proxy or a UDP proxy:

  • When Nginx acts as a TCP proxy, it maintains TCP connections to the upstream server. When the server fails, those connections break; Nginx tries to reconnect, fails, automatically takes the upstream out of rotation for a period of time, and probes it periodically until it recovers. This prevents TCP traffic from being forwarded to a dead server and protects the service success rate.
  • When Nginx acts as a UDP proxy, UDP is connectionless, so QUIC SLB keeps forwarding datagrams to the faulty server even though no responses ever come back. Because of the nature of UDP, Nginx does not conclude that the upstream is abnormal, so large numbers of UDP packets continue to be sent to the faulty instance, and the success rate of the QUIC channel drops significantly.

The analysis above shows that health monitoring over UDP has clear drawbacks, and we wanted to actively probe QUIC Server health over TCP. We therefore started exploring new solutions that would support UDP forwarding, TCP-based active health checks, service discovery and registration, and fit well at the QUIC SLB layer.

After investigating many options, the closest fit was the open-source Nginx UDP Health Check project, which supports both UDP packet forwarding and active TCP health checks. However, it does not support dynamic changes to the upstream IPs in nginx.conf, that is, QUIC Server instances cannot come and go dynamically, which directly breaks the HPA capability of the server cluster. We therefore abandoned this option.

Finally, we found that the company's internal L4LB component supports TCP active health checks and UDP packet forwarding, as well as dynamically bringing instances online and offline, which fits our scenario perfectly. We therefore adopted L4LB as the forwarding hub between QUIC SLB and QUIC Server.

Concretely, we apply for a fixed UDP intranet L4LB virtual IP for each group of QUIC Servers, so QUIC SLB only has to forward UDP datagrams to fixed virtual IPs. L4LB enables TCP health checks; when a QUIC Server instance in the group fails, its health check fails and L4LB removes it, so subsequent UDP packets are no longer forwarded to that instance until it becomes healthy again. This neatly solves automatic registration and active health monitoring for QUIC Server.

2.3 Combining push and pull to improve client-side network disaster recovery

The network framework of the Trip.com App supports three channels, QUIC, TCP, and HTTP, at the same time. More than 80% of user requests reach the server through the QUIC channel, with hundreds of millions of requests per day. On top of the existing multi-channel and multi-IP switching capabilities, further improving disaster recovery is particularly important. We therefore designed a combined push-and-pull strategy that, together with the company's configuration system, achieves channel and IP switching within seconds. A simplified flow follows:

When the client App starts or switches between foreground and background, it pulls the latest configuration, and the network framework switches channels or IPs losslessly based on it.

At the same time, when the App is active in the foreground, configuration updates are actively pushed to users, so online users perceive changes immediately and switch to the latest network configuration; the switch is imperceptible to users.

In this way, the QUIC client network framework further improves disaster recovery. When an IP fails, all users can be notified within seconds to switch away from it; when a channel becomes abnormal, users are switched to a healthy channel without being affected.

2.4 Stabilizing monitoring and alerting and building elastic scaling metrics

The stability of the QUIC data monitoring system is vital for fault warning and fault response. QUIC runtime data is emitted by writing to the access log and error log; logagent ships the server's local log data to Kafka, Hangout consumes and parses it, and the data lands in ClickHouse for storage and query. This makes it very convenient to observe and analyze runtime data.

After completing the QUIC network architecture upgrade, we ran into the following two problems when relying solely on this log pipeline:

First, if monitoring and alerting are based solely on this data, fluctuations in intermediate links can make them inaccurate. For example, a Hangout consumer failure may look like a sudden drop or spike in traffic, affecting the timeliness and accuracy of alerts.

Second, containerization brings elastic scaling, and HPA relies on scaling metrics. Resource metrics such as CPU and memory utilization alone do not fully reflect the state of a QUIC server. Based on the characteristics of the QUIC service, some HPA metrics still need to be customized, such as the percentage of idle connections and the percentage of idle port numbers, to establish a more reasonable and stable scaling signal.

Based on these two considerations, we integrated Nginx with Prometheus and report key metrics and scaling metrics through it. Important indicators such as success rate, latency, and the number of available connections are pre-aggregated inside Nginx, and only the aggregated values are reported, which keeps the data volume small and makes the overhead of the Prometheus integration negligible. We also expose HPA metrics such as the percentage of idle connections and the percentage of idle ports, so the QUIC cluster can scale out accurately and quickly during traffic peaks and shrink back during quiet periods to save machine resources.
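
As an illustration, the two custom HPA metrics could be defined as follows (the formulas are one possible definition based on the metric names; the exact definitions in our exporter may differ):

```c
#include <stdint.h>

/* One possible definition of the custom HPA metrics, exported to Prometheus
 * as pre-aggregated gauges; struct fields and formulas are illustrative. */
typedef struct {
    uint64_t worker_connections;   /* configured connection capacity */
    uint64_t active_connections;   /* currently used connections */
    uint64_t port_range_size;      /* size of the local port range for upstream traffic */
    uint64_t ports_in_use;         /* local ports currently bound */
} quic_server_stats_t;

static double idle_connection_pct(const quic_server_stats_t *s)
{
    return 100.0 * (double) (s->worker_connections - s->active_connections)
                 / (double) s->worker_connections;
}

static double idle_port_pct(const quic_server_stats_t *s)
{
    return 100.0 * (double) (s->port_range_size - s->ports_in_use)
                 / (double) s->port_range_size;
}
```

HPA can then be configured to scale out when either percentage falls below a chosen threshold and to scale in when both stay comfortably high.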

In this way, the QUIC monitoring and alerting system has two data sources: Prometheus focuses on key indicators and aggregated data, while ClickHouse holds detailed runtime data; the two complement each other.

In the process of supporting Prometheus, we encountered many compilation problems caused by mismatched dependency versions. Taking nginx/1.25.3 as an example, a compatible set of versions is listed below:

| Component | Description | Version | Download Link |
| --- | --- | --- | --- |
| nginx-quic | Official nginx repository | 1.25.3 | https://github.com/nginx/nginx/releases/tag/release-1.25.3 |
| nginx-lua-prometheus | Lua Prometheus library | 0.20230607 | https://github.com/knyar/nginx-lua-prometheus/releases/tag/0.20230607 |
| luajit | Lua just-in-time compiler | v2.1-20231117 | https://github.com/openresty/luajit2/releases/tag/v2.1-20231117 |
| lua-nginx-module | Lua-nginx framework | v0.10.25 | https://github.com/openresty/lua-nginx-module/releases/tag/v0.10.25 |
| ngx_devel_kit | Development kit required by lua-nginx-module | v0.3.3 | https://github.com/vision5/ngx_devel_kit/releases/tag/v0.3.3 |
| lua-resty-core | lua-resty core module | v0.1.27 | https://github.com/openresty/lua-resty-core/releases/tag/v0.1.27 |
| lua-resty-lrucache | lua-resty LRU cache module | v0.13 | https://github.com/openresty/lua-resty-lrucache/releases/tag/v0.13 |

3. Full-Link Tracking Points

3.1 Implementation

With the goal of optimizing user-perceived link time and finding and fixing latency bottlenecks, we began sampling and analyzing slow requests and gradually added more data points to the server access.log as needed. Nginx officially supports data points at the granularity of a single HTTP request, but it is hard to understand a request without the data of the connection it belongs to; we still need the user connection's network environment, handshake details, data transmission details, congestion state and other data to locate problems.

Previously, the QUIC client only had end-to-end overall tracking data, and there were mapping issues between the QUIC tracking data and the existing tracking system. We therefore collected and filtered Cronet metrics and integrated them into the existing tracking system.

3.1.1 Collect and filter QUIC client Cronet Metrics data

The end-to-end flow now supports fine-grained tracking of stages such as DNS resolution, TLS handshake, request sending, and response receiving, so the data of each stage of a QUIC request is clear at a glance.

3.1.2 Modify the server nginx source code

On the server we added detailed tracking over the entire life cycle of a connection, from creation to destruction. In addition, we join connection-level and request-level tracking data through the connection CID, which provides reliable data support for problem location and performance optimization. The following lists some of the server-side full-link tracking points by category and briefly summarizes their uses (a simplified sketch of the connection-level fields follows the list):

1) Connection life cycle timeline

Connection type (1-RTT/0-RTT), connection creation time (when the server receives the first packet from the client), time the connection sends its first packet, time it receives its first ACK frame, handshake completion time, time a cc (CONNECTION_CLOSE) frame is received or sent, the connection idle timeout, connection destruction time, and so on. These tracking points mainly help us sort out the key moments in a connection's life cycle and the details of handshake latency.

2) Data transmission details

(All of the following are within the life cycle of the connection) The total number of bytes, data packets, and data frames sent and received, the packet retransmission rate, the frame retransmission rate, etc. This type of data helps analyze our data transmission characteristics and provides data reference for link transmission optimization and congestion control algorithm adjustment.

3) RTT (Round-trip time) and congestion control data

Smoothed RTT, minimum RTT, first and last RTT, congestion window size, maximum in_flight, slow start threshold, congestion recovery_start time, etc. These data can be used to analyze user network conditions, observe congestion ratios, evaluate the rationality of congestion control algorithms, etc.

4) User Information

Client IP, country and region, etc. This helps us to perform regional-level aggregate analysis of user data and find regional differences in network transmission so as to make some targeted optimizations.
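
As a rough illustration (field names are hypothetical, not the exact ones in our access log), the connection-level record could look like the following, written when the connection is destroyed and joined to request-level entries via the CID:

```c
#include <stdint.h>

/* Hypothetical connection-level tracking record; real field names differ. */
typedef struct {
    char      dcid_hex[41];        /* connection ID used to join with request-level logs */
    uint8_t   is_0rtt;             /* 1 if the connection was established with 0-RTT */

    /* life cycle timeline (ms since epoch) */
    uint64_t  created_at;          /* first client packet received */
    uint64_t  first_send_at;       /* first packet sent by the server */
    uint64_t  first_ack_at;        /* first ACK frame received */
    uint64_t  handshake_done_at;
    uint64_t  closed_at;

    /* transmission details */
    uint64_t  bytes_sent, bytes_received;
    uint64_t  packets_sent, packets_lost;
    double    packet_retrans_rate;

    /* RTT and congestion control */
    uint32_t  srtt_ms, min_rtt_ms;
    uint32_t  cwnd_bytes, ssthresh_bytes;
    uint32_t  max_in_flight;

    /* user information */
    char      client_ip[46];
    char      country[3];
} quic_conn_trace_t;
```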

3.2 Analysis and Mining

By merging and aggregating the various data points, we made QUIC runtime data visible and transparent, which also helped us discover many problems and optimization opportunities. A few of them are described in detail below:

3.2.1 Abnormal 0-RTT connection lifetime causing duplicate requests

By filtering for 0-RTT connections, we observed that their total lifetime was exactly equal to the max_idle_timeout negotiated between the QUIC client and server. max_idle_timeout is the idle timeout: if no data is exchanged between client and server for that long, the connection is closed. Under normal circumstances, after the last HTTP request on a connection completes, the connection should only enter the closing process if no further interaction occurs within max_idle_timeout; when requests keep arriving on the connection, its lifetime must be greater than max_idle_timeout (actual lifetime = time the last request's data transfer completes - connection creation time + max_idle_timeout).

To confirm this, we used the connection's DCID to list all HTTP requests within the life cycle of a 0-RTT connection, and found that even while requests were still being processed, the 0-RTT connection was unconditionally closed once it had lived for max_idle_timeout. We concluded that the life-extension logic for 0-RTT connections was broken, read the nginx-quic source, and located and fixed the offending code:

In the ngx_quic_run() function in the source code, two connection-related timers are set:

Both affect when the connection closes. The c->read timer has life-extension logic: it is refreshed as requests keep arriving during the connection's life cycle. The qc->close timer has no life-extension logic in the source; it is deleted in only one place: during ngx_quic_handle_datagram(), if SSL initialization has completed, ngx_quic_do_init_streams() is called, which deletes the qc->close timer.

  • If the connection is 1-RTT, the first execution of ngx_quic_handle_datagram() does not complete SSL initialization, so the qc->close timer is created before SSL initialization finishes, and the deletion logic runs normally during subsequent packet exchanges;
  • If the connection is established with 0-RTT, the first execution of ngx_quic_handle_datagram() already completes SSL initialization, so the only deletion happens before the qc->close timer is even set. The timer is therefore never removed, and the connection is closed as soon as max_idle_timeout is reached (a simplified sketch of the fix idea follows this list);
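
The shape of the fix is simply to make sure that a connection that has already completed initialization does not keep a stale qc->close timer armed. The sketch below is illustrative only, written against nginx-quic's internal types as we understand them, and is not the exact upstream patch:

```c
/* Illustrative only; assumes nginx-quic internals (ngx_quic_connection_t,
 * qc->close event, qc->streams.initialized flag). In the 0-RTT case
 * ngx_quic_do_init_streams() runs on the very first datagram, before the
 * qc->close timer is armed, so the normal deletion path never fires. */
static void
ngx_quic_cancel_stale_close_timer(ngx_connection_t *c)
{
    ngx_quic_connection_t  *qc = ngx_quic_get_connection(c);

    /* call after the qc->close timer has been armed in ngx_quic_run() */
    if (qc->streams.initialized && qc->close.timer_set) {
        ngx_del_timer(&qc->close);   /* lifetime is now governed by c->read */
    }
}
```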

Besides producing many wasted new 0-RTT connections, aggregated analysis of the full-link tracking data showed that this bug can also cause duplicate requests in our application. The cause of the duplicates is as follows:

The QUIC client sends a request, which the QUIC server receives and forwards to the backend application. Before the server receives the backend's response, the qc->close timer fires and the server immediately sends a cc frame to the client. According to the QUIC protocol, the client must close the current connection immediately upon receiving the cc frame, so it treats the first attempt as failed, establishes a new connection, and sends the request again, which leads to the duplicate request. From the client's business perspective the request was made only once, but the backend application receives two identical requests in a row, which matters a lot for interfaces with strict idempotence requirements.

We discovered and fixed this timer bug in February 2024. After the fix, the connection reuse ratio increased by 0.5% and the proportion of unnecessary 0-RTT connections decreased by 7%. The official nginx-quic branch committed a fix for the same issue on 2024/04/10: https://hg.nginx.org/nginx-quic/rev/155c9093de9d

3.2.2 Client Cronet upgrade improves the experience of P95 (long-tail) users

Data analysis showed that the proportion of 0-RTT among long-tail users was not high; most of their connections were new 1-RTT connections. We determined that this was likely related to how the client's Cronet had been pruned. Before this optimization the Trip.com App used an old 2020 Cronet build that had been trimmed for package size (for example, the PSK-related logic had been reworked to session level), removing more than 100,000 lines of unused code while keeping the key features. After talking with other Cronet users, we also learned that newer Cronet versions bring good performance improvements, so after nearly four years the client carried out a major upgrade of the Cronet library.

In addition, since Chromium officially announced in November 2023 that it would no longer provide CLI build tools for iOS, the goal of this upgrade was to pick a version as close as possible to the last one before the iOS build tooling was removed. We ultimately chose 120.0.6099.301.

After the Cronet upgrade and the related adaptation work, we compared the online versions before and after the upgrade: P95 request latency on the user side dropped by 18%.

3.2.3 The congestion control implementation in the nginx-quic branch leaves large room for optimization

Through study of the nginx-quic source and analysis of the tracking data, we found that the congestion control algorithm in the source is a simplified version of Reno, with the initial congestion window set to a large fixed value of 131054 bytes (with an MTU of 1200 that is about 109 packets in flight from the very start). Even after a congestion event, the window never drops below 131054 bytes. When the network is good this is unfair to competing traffic, and when the network is poor it aggravates congestion. After discovering this, we began reworking the congestion control logic in the source; this work is detailed in part 4.
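
For comparison, RFC 9002 (Section 7.2) recommends an initial congestion window of ten times the maximum datagram size, limited to the larger of 14,720 bytes and twice the maximum datagram size. A small sketch of that recommendation (the helper name is ours, not nginx's):

```c
#include <stdint.h>

/* RFC 9002, Section 7.2: recommended initial congestion window.
 * Helper name and types are illustrative, not from the nginx source. */
static uint64_t
quic_initial_cwnd(uint64_t max_datagram_size)
{
    uint64_t w   = 10 * max_datagram_size;
    uint64_t cap = 2 * max_datagram_size > 14720 ? 2 * max_datagram_size : 14720;

    return w < cap ? w : cap;
}

/* With max_datagram_size = 1200 this yields 12,000 bytes (10 packets),
 * an order of magnitude smaller than the fixed 131,054 bytes in the
 * original nginx-quic code. */
```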

4. Exploration of Congestion Control Algorithms

In the official nginx-quic branch, the congestion control implementation is still at the demo level. Taking advantage of the fact that QUIC implements congestion control in the application layer without relying on the operating system, we abstracted and refactored the congestion-control-related logic in the official Nginx code so that different algorithms can be plugged in easily. The algorithm is also switchable via configuration, so different congestion control algorithms can be chosen for different network conditions and server business scenarios.

4.1 Introduction to Congestion Control Algorithms

Mainstream congestion control algorithms fall roughly into two categories: loss-based algorithms such as Reno and Cubic, and algorithms driven by bandwidth and delay feedback such as the BBR family. Below is a brief overview of the principles, applicable scenarios, and pros and cons of Reno, Cubic, and BBR:

1) Reno is one of the earliest TCP congestion control algorithms and is loss-based. It uses the slow start threshold (ssthresh) to control the sending rate: in slow start the sender roughly doubles the congestion window every round-trip time (RTT); once the window exceeds ssthresh the connection enters congestion avoidance, where the window grows slowly; when packet loss is detected the sender assumes congestion and halves the window (a compact sketch of these updates follows this list). Reno suits low-latency, low-bandwidth scenarios and the early Internet. Its advantages are that it is simple, intuitive, and easy to understand and implement; its drawbacks are that it reacts slowly to network changes, which can lead to low utilization, and it performs poorly under high loss rates.

2) Cubic is an improvement on Reno. It grows the congestion window according to a cubic function of the time since the last congestion event, which lets the window ramp up quickly when far from the previous maximum and probe cautiously when near it. It suits network environments with moderate packet loss and performs well on the mainstream Internet. Its advantage over Reno is that it adapts better to network changes and improves utilization; its drawback is that under high loss rates or in long fat pipes the sending window may quickly shrink to a very small size, degrading performance.

3) BBR (Bottleneck Bandwidth and RTT) is a congestion control algorithm developed by Google. It estimates the path's bottleneck bandwidth and round-trip time and paces the sending rate accordingly rather than reacting only to loss. This lets it estimate the degree of congestion more accurately, avoid both over- and under-filling the pipe, and improve throughput and stability. It suits networks with high bandwidth, high latency, or moderate packet loss. Its advantages are more precise rate control and better utilization and stability; its drawbacks are additional CPU cost and, in some cases, fairness issues with competing flows.
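
The sketch below illustrates the Reno-style responses described above (slow start, congestion avoidance, and halving on loss). It is a didactic sketch roughly following the NewReno description in RFC 9002, not the code we ship:

```c
#include <stdint.h>

/* Didactic Reno-style congestion controller (per-ACK updates); names are illustrative. */
typedef struct {
    uint64_t cwnd;        /* congestion window, bytes */
    uint64_t ssthresh;    /* slow start threshold, bytes */
    uint64_t min_cwnd;    /* floor for the window, bytes */
    uint64_t mss;         /* maximum datagram size, bytes */
} reno_cc_t;

static void
reno_on_ack(reno_cc_t *cc, uint64_t acked_bytes)
{
    if (cc->cwnd < cc->ssthresh) {
        cc->cwnd += acked_bytes;                       /* slow start: ~doubles per RTT */
    } else {
        cc->cwnd += cc->mss * acked_bytes / cc->cwnd;  /* congestion avoidance: ~+1 MSS per RTT */
    }
}

static void
reno_on_congestion_event(reno_cc_t *cc)
{
    cc->ssthresh = cc->cwnd / 2;                       /* multiplicative decrease */
    cc->cwnd = cc->ssthresh > cc->min_cwnd ? cc->ssthresh : cc->min_cwnd;
}
```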

4.2 Optimization Implementation and Benefits

In the original source, the congestion control logic is scattered across various functions rather than being abstracted and managed in one place. We sorted out the events a congestion control algorithm must respond to and abstracted each of them into a function, roughly as follows:
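
A minimal sketch of what such an event-driven abstraction might look like (the exact interface in our code differs; this only shows the idea of one callback per congestion control event, with the algorithm selected by configuration):

```c
#include <stdint.h>

/* Hypothetical congestion control interface: one callback per event.
 * Each algorithm (Reno, Cubic, BBR, ...) provides its own implementation
 * and is selected through configuration at connection setup. */
typedef struct quic_cc_ops_s {
    const char *name;

    void (*init)(void *cc_state, uint64_t max_datagram_size);
    void (*on_packet_sent)(void *cc_state, uint64_t bytes, uint64_t now_us);
    void (*on_ack_received)(void *cc_state, uint64_t acked_bytes,
                            uint64_t rtt_us, uint64_t now_us);
    void (*on_packet_lost)(void *cc_state, uint64_t lost_bytes, uint64_t now_us);
    void (*on_persistent_congestion)(void *cc_state);

    /* how many more bytes the connection may currently put in flight */
    uint64_t (*available_window)(void *cc_state, uint64_t bytes_in_flight);
} quic_cc_ops_t;

/* One table per algorithm, chosen by a configuration directive
 * (the directive name itself is hypothetical). */
extern const quic_cc_ops_t quic_cc_reno;
extern const quic_cc_ops_t quic_cc_cubic;
extern const quic_cc_ops_t quic_cc_bbr;
```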

Based on this abstraction, we implemented the mainstream congestion control algorithms Reno, Cubic, and BBR in turn, and tuned their parameters and logic using the tracking data, including setting a reasonable initial window and minimum window and choosing the best window reduction logic on congestion. These adjustments change the packet retransmission rate and the connection congestion ratio, which in turn affect link transmission performance.

By optimizing the QUIC congestion control algorithm, we reduced the connection congestion ratio in the SHA environment by 15 percentage points and cut SHA end-to-end latency by 4%. Going forward we will continue adaptive tuning based on Trip.com's data transmission characteristics and the different network conditions of each IDC, and explore the optimal congestion control logic for each IDC through long-term A/B experiments.

5. Achievements and Prospects

We are constantly optimizing QUIC link performance and improving QUIC channel stability, aiming to provide high-quality network services for Trip.com's growing business. At the same time, we are also constantly exploring support for more QUIC application scenarios.

1) Through containerization, scaling changed from manual to automatic and scaling time was shortened by a factor of 30; a large number of server instances can be brought online within 20 seconds;

2) By building full-link tracking points and aggregating their data, we found a number of optimizations: the 0-RTT connection bug was fixed, raising the connection reuse ratio by 0.5%, and the congestion control algorithm was optimized, shortening end-to-end latency by 4%;

3) By deploying a QUIC cluster in FRA (Frankfurt), latency for European users was reduced by more than 20% and the network success rate increased by more than 0.5%;

4) The Ctrip Travel App and the Business Travel Overseas App were connected to QUIC, greatly improving the network success rate and performance for domestic users and business travelers in overseas scenarios;

5) After the client upgraded Cronet, on top of the optimizations above, overall end-to-end latency on the user side was reduced by 18%.

Trip.com's exploration of QUIC will continue. We will follow community developments closely, explore more QUIC application scenarios, uncover more optimizations, and contribute to Ctrip's internationalization strategy.
