This article is reprinted from the WeChat public account "Backend Technology Compass"; the author is Compass Krypton Entrance. Please contact the Backend Technology Compass public account for permission to reprint.

1. Dabai and Xiaohei

Life is not only about what is right in front of us; there is also poetry and fields far away. A new week has begun. Dabai and Xiaohei are colleagues. They usually drink, eat, and play games together, and of course they sometimes discuss academic topics and cutting-edge technology. Sure enough, Xiaohei had heard about something new and struck up a conversation with Dabai:

Xiaohei: Dabai, Dabai, I heard the HTTP protocol has reached 3.0?

Dabai: Yes, it has, and what's more, it is built on top of UDP!

Xiaohei: UDP? Are you sure?! UDP is the poster child for unreliability. Is TCP out of fashion now?

Dabai: It is absolutely true. It is already running with good results and is being rolled out. The Chrome Canary build is said to support it already, so you can be among the first to try it.

Xiaohei: Oh! I haven't even fully figured out HTTP2.0, and 3.0 is already here. Come on, tell me about this black technology.

Xiaohei is a straightforward person. He promised Dabai that if he explained everything clearly, he would treat him to a barbecue at the log cabin on Friday, with a few drinks to unwind.
Seeing Xiaohei's thirst for knowledge, and with the barbecue in mind, Dabai decided to walk Xiaohei through HTTP3.0 and the QUIC protocol. In this article you will learn where HTTP3.0 came from, how the QUIC protocol works, and what its prospects look like.
2. HTTP2.0 and HTTP3.0

Technology never stands still. Businesses on the Internet are constantly evolving, and the same is true for important network protocols like HTTP: each new version is an iteration on the old one.

2.1 The love-hate relationship between HTTP2.0 and TCP

HTTP2.0 was launched in 2015 and is still relatively young. Its major optimizations, such as the binary framing layer, multiplexing, header compression, and server push, have truly taken the HTTP protocol to a new level. Companies like Google were not satisfied with this, though; they wanted to keep improving HTTP's performance and deliver the best possible experience with the least time and resources. So the question becomes: although HTTP2.0 performs well, what shortcomings remain?
Readers familiar with the HTTP2.0 protocol will know that these remaining shortcomings are basically caused by the TCP protocol itself. As the saying goes, the water that carries the boat can also capsize it; TCP itself is not really to blame. In our eyes, TCP is a connection-oriented, reliable transport-layer protocol, and currently almost all important protocols and applications are implemented on top of it. The network environment changes quickly, but the TCP protocol evolves relatively slowly. It is exactly this contradiction that prompted Google to make a seemingly unexpected decision: to develop a new generation of the HTTP protocol on top of UDP.

2.2 Why Google chose UDP

As mentioned above, Google's choice of UDP looks surprising, but it makes sense on closer inspection. Let's briefly compare the shortcomings of TCP with the advantages of UDP. In brief: TCP is implemented in operating-system kernels and baked into countless network devices, so it is very hard to upgrade, and costs such as its handshake and head-of-line blocking are difficult to remove; UDP, by contrast, is lightweight and connectionless, but on its own it offers no reliability, ordering, flow control, or congestion control.
From this comparison we can see that it is not easy for Google to transform and upgrade TCP itself. UDP does not carry the baggage that TCP accumulates in order to guarantee reliable connections, but UDP by itself is unreliable and cannot be used directly. In short, it was natural for Google to build, on top of UDP, a new protocol that has the advantages of TCP. This new protocol is the QUIC protocol.

2.3 The QUIC protocol and HTTP3.0

QUIC is short for Quick UDP Internet Connections, literally "fast UDP Internet connections". Here is part of Wikipedia's introduction to the QUIC protocol:

The QUIC protocol was originally designed by Jim Roskind at Google, implemented and deployed in 2012, and publicly announced and described to the IETF in 2013 as experiments expanded. QUIC improves the performance of connection-oriented web applications that currently use TCP. It does this by establishing a number of multiplexed connections between two endpoints over the User Datagram Protocol (UDP). Secondary goals of QUIC include reducing connection and transport latency and estimating bandwidth in each direction to avoid congestion. It also moves the congestion control algorithms into user space instead of kernel space, and extends them with forward error correction (FEC) to further improve performance when errors occur.

HTTP3.0 is also known as HTTP over QUIC. It abandons the TCP protocol and instead uses the QUIC protocol, which is based on UDP.

3. A detailed look at the QUIC protocol

Choose what is good and follow it; change what is not. Since HTTP3.0 has chosen the QUIC protocol, HTTP3.0 essentially inherits the powerful features of HTTP2.0 and goes further in solving some of the problems that remain in HTTP2.0, while inevitably introducing new problems of its own. The QUIC protocol therefore has to provide, over UDP, the important functions that HTTP2.0 realized on top of TCP, and at the same time solve TCP's legacy problems. Let's look at how QUIC does this.

3.1 The head-of-line blocking problem

Head-of-line blocking (HOL blocking for short) is a performance-limiting phenomenon in computer networks. In plain terms, one data packet holds up a whole queue of packets behind it, and none of them can move on until it arrives. Head-of-line blocking can exist at both the HTTP layer and the TCP layer; in HTTP1.x it exists at both. The multiplexing mechanism of HTTP2.0 solves head-of-line blocking at the HTTP layer, but the problem remains at the TCP layer: TCP may receive packets out of order, yet it must collect, sort, and reassemble all the data before handing it to the upper layer. If one packet is lost, TCP has to wait for its retransmission, so a single lost packet blocks the delivery of data on the entire connection. The QUIC protocol, being built on UDP, can carry multiple independent streams on one connection. The streams do not affect one another: when one stream loses a packet, the impact is confined to that stream, which solves the head-of-line blocking problem. A minimal sketch of this per-stream idea follows.
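To make this concrete, here is a minimal, purely illustrative Python sketch (the class StreamReceiver and its method on_packet are names invented for this article, not part of any real QUIC implementation). It only shows the bookkeeping idea: each stream is reassembled independently, so a gap on stream 2 does not hold back data that has already arrived on stream 1.

```python
# Minimal sketch (hypothetical, not a real QUIC stack): a receiver that reassembles
# several streams independently, so a gap in one stream does not block delivery of
# data that has already arrived on another stream.

class StreamReceiver:
    def __init__(self):
        self.buffers = {}     # stream_id -> {offset: payload} for out-of-order chunks
        self.delivered = {}   # stream_id -> next byte offset the application expects

    def on_packet(self, stream_id, offset, payload):
        """Store a (possibly out-of-order) chunk and deliver whatever is contiguous."""
        self.buffers.setdefault(stream_id, {})[offset] = payload
        self.delivered.setdefault(stream_id, 0)
        ready = []
        # Deliver only the contiguous prefix of *this* stream; other streams are unaffected.
        while self.delivered[stream_id] in self.buffers[stream_id]:
            chunk = self.buffers[stream_id].pop(self.delivered[stream_id])
            self.delivered[stream_id] += len(chunk)
            ready.append(chunk)
        return b"".join(ready)

rx = StreamReceiver()
print(rx.on_packet(1, 0, b"hello "))   # stream 1 delivers immediately: b'hello '
print(rx.on_packet(2, 5, b"world"))    # stream 2 has a gap at offset 0: delivers b''
print(rx.on_packet(1, 6, b"quic"))     # stream 1 keeps flowing: b'quic'
print(rx.on_packet(2, 0, b"hey, "))    # gap filled: stream 2 delivers b'hey, world'
```

A real QUIC stack of course adds retransmission, flow control, and encryption on top of this, but delivering data per stream rather than as one globally ordered byte stream is the essence of why a single lost packet no longer stalls everything.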
3.2 0-RTT connection establishment

A commonly used metric for connection establishment is RTT (Round-Trip Time), the time it takes a packet to travel to the peer and back. RTT is made up of round-trip propagation delay, queuing delay inside network devices, and application-level processing delay.

Generally speaking, the HTTPS protocol needs both a TCP handshake and a TLS handshake to establish a complete connection, which takes at least 2-3 RTTs in total; even plain HTTP needs at least 1 RTT before data can flow. The QUIC protocol, however, can carry valid application data in its very first packet, achieving 0-RTT, although only under certain conditions. Simply put, HTTP2.0 over TCP and TLS has to spend time on the handshake and encryption negotiation before any business data can be sent, whereas QUIC can put business data into the first packet. This is a great advantage in connection latency and can save hundreds of milliseconds. QUIC's 0-RTT does have preconditions: a client and a server interacting for the very first time cannot do 0-RTT, since the two sides are complete strangers. The QUIC protocol therefore distinguishes two cases: the first connection and subsequent (non-first) connections.

3.3 First connection and non-first connection

A client and a server using the QUIC protocol spend 1 RTT on key exchange for the first connection, using the DH (Diffie-Hellman) algorithm. The DH algorithm opened up a new line of thinking for key exchange; the RSA algorithm mentioned in the previous article builds on that same line of thinking, although DH and RSA key exchange are not exactly the same. Interested readers can look into the mathematical principles of the DH algorithm.

3.3.1 First connection

In short, the key negotiation and data transmission between client and server during the first connection follow the basic process of the DH algorithm.

3.3.2 Non-first connection

As mentioned above, when the client and server connect for the first time, the server sends a config package containing the server's public value and two random numbers. The client stores this config and uses it directly on later connections, skipping the 1-RTT key exchange and achieving 0-RTT exchange of business data. The client only keeps the config for a limited time; once it expires, the first-connection key exchange is required again.

3.4 The forward security problem

Forward security is a term from cryptography. Quoting the Baidu Baike explanation: forward security, or forward secrecy, is a security property of communication protocols, meaning that the leakage of a long-term master key does not lead to the leakage of past session keys. Forward security protects past communications against the threat of passwords or keys being exposed in the future. If a system has forward security, the security of historical communications is preserved even when the master key is leaked, including when the system is actively attacked.

In plain terms, forward security means that even if a key leaks, previously encrypted data is not exposed; only current data is affected, not past data. As mentioned earlier, the QUIC protocol derives two encryption keys during the first connection, and the toy sketch below shows how the first of them is negotiated.
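The following toy Diffie-Hellman exchange uses deliberately small, insecure parameters chosen only for illustration (real deployments use large, carefully chosen groups or elliptic curves, and this is not the actual QUIC handshake). It shows where the key K comes from: the server's config carries its public value B, the client contributes its own public value A, and both sides compute the same K without K ever travelling across the network.

```python
# Toy Diffie-Hellman sketch; the prime, generator, and variable names are
# illustrative only and far too small/simple to be secure.
import secrets

p = 2**127 - 1   # a public prime modulus (a Mersenne prime, used here only as a toy)
g = 5            # a public base

# Server side: private value b, public value B = g^b mod p (B is shipped in the config)
b = secrets.randbelow(p - 2) + 1
B = pow(g, b, p)

# Client side: private value a, public value A = g^a mod p (A is sent to the server)
a = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)

# Both sides arrive at the same shared secret K = g^(a*b) mod p
K_client = pow(B, a, p)   # client combines its private a with the server's public B
K_server = pow(A, b, p)   # server combines its private b with the client's public A
assert K_client == K_server
print(hex(K_client))
```

Here B plays the role of the public value shipped in the config; the worry addressed next is precisely what an attacker can do after learning the matching private value b.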
Since the config is stored by the client for a while, an attacker who obtains the server's private value b during that window can recompute K = A^b mod p (that is, g^(ab) mod p) from recorded traffic. If this key were used for encryption and decryption forever, K could then be used to decrypt all historical messages. Therefore a new key is generated afterwards and used for the actual encryption and decryption, and it is destroyed once the interaction finishes; this is how forward security is achieved.

3.5 Forward error correction

Forward error correction is a term from the field of communications. Quoting an encyclopedia definition: forward error correction (FEC), also called forward error correction coding, is a method of increasing the reliability of data communication. On a one-way channel, once an error is discovered, the receiver has no way to request a retransmission; FEC therefore transmits redundant information along with the data, allowing the receiver to reconstruct the data when errors occur during transmission.

From this description, FEC is essentially about adding redundancy so the receiver can check and repair the data on its own. Let's see how the QUIC protocol does it: each time QUIC sends a group of packets, it XORs their data together and sends the result as an FEC packet. After receiving the group, the receiver can use the data packets together with the FEC packet to verify the data and recover a lost packet.

3.6 Connection migration

Network switching happens all the time. The TCP protocol identifies a unique connection by a five-tuple (source IP, source port, destination IP, destination port, protocol). When we switch from a 4G network to WiFi, the phone's IP address changes, so a brand-new TCP connection has to be created to keep transmitting data. The QUIC protocol, being based on UDP, abandons the five-tuple as the notion of a connection and instead uses a 64-bit random number as the connection ID, identifying the connection by that ID. With QUIC, switching between WiFi and 4G in daily use, or moving between base stations, no longer forces a reconnection, which improves the experience at the application layer. A small sketch of this idea follows.
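Here is a minimal sketch of that idea (the class QuicLikeServer and its methods are invented for illustration and are not a real QUIC API): incoming datagrams are routed to per-connection state by connection ID, so the same logical connection survives a change of the client's source address.

```python
# Minimal sketch (invented names, not a real QUIC stack): a server that looks up
# connections by connection ID instead of by (IP, port), so a client whose address
# changes (e.g. Wi-Fi -> 4G) keeps the same logical connection.
import secrets

class QuicLikeServer:
    def __init__(self):
        self.connections = {}   # connection_id -> per-connection state

    def accept(self):
        """Create a new connection identified by a random 64-bit connection ID."""
        cid = secrets.randbits(64)
        self.connections[cid] = {"bytes_received": 0, "last_addr": None}
        return cid

    def on_datagram(self, cid, src_addr, payload):
        """Route by connection ID; the source address may change freely."""
        conn = self.connections[cid]
        if conn["last_addr"] != src_addr:
            conn["last_addr"] = src_addr          # the connection migrated; nothing breaks
        conn["bytes_received"] += len(payload)
        return conn

server = QuicLikeServer()
cid = server.accept()
server.on_datagram(cid, ("203.0.113.7", 4433), b"hello over Wi-Fi")
state = server.on_datagram(cid, ("198.51.100.9", 50000), b"same connection over 4G")
print(state["bytes_received"])   # both datagrams were counted on the same logical connection
```

A real QUIC implementation adds validation and security checks around migration, but the routing principle is the same.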
4. Applications and prospects of QUIC

From the introduction above we can see that although the QUIC protocol is implemented on top of UDP, it implements and optimizes the important functions of TCP; otherwise nobody would adopt it. The core idea of the QUIC protocol is to take the functions that TCP implements inside the kernel, such as reliable transmission, flow control, and congestion control, and move them into user space. At the same time, its experiments with encrypted transport have also pushed the development of TLS1.3 forward. However, TCP is deeply entrenched, and many network devices even have rather unfriendly policies toward UDP packets, intercepting them and lowering the connection success rate.

Google, as the leading company, has made many attempts in its own products, and in China Tencent has also experimented extensively with the QUIC protocol. Tencent Cloud in particular showed great interest in QUIC, made some optimizations of its own, and then ran experiments in several key products covering connection migration, QUIC success rate, and latency in weak-network environments, producing a lot of valuable production data, including request-latency distributions under different packet-loss rates in mobile-network scenarios.

Promoting anything new takes time. HTTP2.0 and HTTPS, which have been around for years, are still not as widely deployed as one might expect, and the same goes for IPv6. QUIC, however, has shown strong vitality. Let us wait and see!

5. Conclusion

Network protocols are complex in themselves, and this article can only sketch the important parts from a high-level perspective. If a particular point interests you, you can dig into the relevant code and RFC documents. Many of us have met the interview question: how would you implement the main functions of TCP on top of UDP? I once ran into it in a written test, and it was frustrating because the topic is so broad. Looking at the QUIC protocol now gives us an answer: take UDP as the foundation and move the important functions of TCP into user space, effectively bypassing the kernel to implement a user-mode TCP. The actual implementation, of course, is still very complicated.