I’ve explained the QUIC protocol in ten minutes. Do you understand it?

Let's review the development of HTTP. In the beginning, all we wanted was a protocol that could fetch document content on the Internet through a GET request. Later, this GET request was written into an official document, and HTTP/1.0 was born. The emergence of HTTP/1.0 was groundbreaking, and some of the standards it introduced are still in use today, such as HTTP headers and the protocol version number. However, this version still had obvious defects, most notably its lack of persistent connections: the connection had to be torn down after every request-response cycle, which is very inefficient. Not long after, the HTTP/1.1 standard was formulated. It remains the most widely used version on the Internet. HTTP/1.1 fixed the lack of persistent connections and also added caching and cache-control features.

However, even though HTTP/1.1 solves some of the connection performance issues, its efficiency is still not very high, and HTTP also suffers from a head-of-line blocking problem (which I explained in my article on HTTP/2.0).

If five requests are sent at the same time and the first one is not processed, the subsequent requests cannot be processed either, as shown in the following figure.

If the first request is not processed, requests 2, 3, 4, and 5 are blocked on the client side and can only be sent one by one after request 1 has been handled. When the network is smooth, performance is not greatly affected. However, once request 1 fails to reach the server for some reason, or its response is delayed by network congestion, all subsequent requests are affected and may be blocked indefinitely, and the problem becomes more serious.

Although HTTP/1.1 introduced pipelining to mitigate head-of-line blocking, pipelined responses still have to be returned in the order the requests were sent, so the problem is not fundamentally solved. As the protocol continued to evolve, HTTP/2.0 was proposed.

HTTP/2.0

HTTP/2.0 solves the head-of-line blocking problem by using streams and binary framing.

HTTP/2.0 divides a TCP connection into multiple streams. Each stream has its own stream ID, and a stream can be initiated by either the client or the server.

HTTP/2.0 also splits the information to be transmitted into frames and encodes them in a binary format. In other words, headers and data are carried in separate frames, and these binary frames are distributed across multiple streams. Let's take a look at the picture below.

As you can see, with these two mechanisms HTTP/2.0 assigns multiple requests to different streams and then splits each request into frames for binary transmission. Frames from different streams can be interleaved and sent without a fixed order; after they reach the other end, the receiver reassembles them by stream and can decide which stream to process first based on priority.
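
To make the framing idea concrete, here is a minimal Go sketch (with a made-up `Frame` type, not the real HTTP/2 frame layout) showing how frames tagged with stream IDs can be interleaved on one connection and demultiplexed back into per-stream sequences by the receiver.

```go
package main

import "fmt"

// Frame is a simplified stand-in for an HTTP/2 frame: every frame carries
// the ID of the stream it belongs to, so frames from different requests
// can be interleaved on a single TCP connection.
type Frame struct {
	StreamID uint32
	Kind     string // "HEADERS" or "DATA" in this sketch
	Payload  string
}

func main() {
	// Frames from three concurrent requests, interleaved on one connection.
	wire := []Frame{
		{1, "HEADERS", "GET /a"},
		{3, "HEADERS", "GET /b"},
		{5, "HEADERS", "GET /c"},
		{3, "DATA", "body of /b"},
		{1, "DATA", "body of /a"},
		{5, "DATA", "body of /c"},
	}

	// The receiver demultiplexes frames back into per-stream sequences.
	streams := map[uint32][]Frame{}
	for _, f := range wire {
		streams[f.StreamID] = append(streams[f.StreamID], f)
	}
	for id, fs := range streams {
		fmt.Printf("stream %d reassembled from %d frames\n", id, len(fs))
	}
}
```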

QUIC Protocol

Although HTTP/2.0 solves head-of-line blocking at the HTTP layer, each HTTP/2.0 connection is still established and transmitted over TCP, and TCP requires packets to be processed in strict order. This means that when a packet carrying part of one stream is lost, the server does not deliver data from the other streams either; it waits for the client to retransmit the lost packet first. For example, if a request is carried by three streams and a packet belonging to stream 2 is lost, the processing of stream 1 and stream 3 is also blocked; only after the retransmitted stream 2 data arrives does the server continue processing.

This is the crux of the problem with TCP connections.

In view of this problem, let's put TCP aside for now and get to know the QUIC protocol first.

QUIC is pronounced the same as "quick", meaning fast. It is a UDP-based transport protocol proposed by Google, and the name stands for Quick UDP Internet Connection.

The first feature of QUIC is that it is fast. Why is it fast, and where exactly is it fast?

As we all know, HTTP uses TCP for message transmission at the transport layer, and HTTPS and HTTP/2.0 also use TLS for encryption. This introduces handshake delay before a connection can carry data: the TCP three-way handshake (1 RTT) plus the TLS handshake (2 RTTs with TLS 1.2), as shown in the figure below.

For short-lived connections in particular, this handshake delay has a significant impact and cannot be eliminated.

In contrast, QUIC establishes connections faster. It uses UDP as the transport-layer protocol, which removes the delay of the TCP three-way handshake. In addition, QUIC's encryption uses the latest version of TLS, TLS 1.3. Compared with TLS 1.1/1.2, TLS 1.3 allows the client to send application data before the TLS handshake has fully completed and supports 1-RTT and 0-RTT handshakes, so connections can be established very quickly.
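
As a back-of-envelope comparison, the sketch below simply adds up the handshake round trips under an assumed 50 ms RTT; the numbers are illustrative, not measurements.

```go
package main

import "fmt"

func main() {
	const rtt = 50 // milliseconds, an assumed example round-trip time

	// Rough setup cost before the first HTTP request can be sent,
	// counted in round trips (server processing time ignored).
	tcpTLS12 := 1*rtt + 2*rtt // TCP three-way handshake + TLS 1.2 handshake
	quicFirst := 1 * rtt      // QUIC first connection (1-RTT handshake)
	quicResumed := 0 * rtt    // QUIC resumed connection (0-RTT)

	fmt.Printf("TCP + TLS 1.2: ~%d ms\n", tcpTLS12)
	fmt.Printf("QUIC (1-RTT):  ~%d ms\n", quicFirst)
	fmt.Printf("QUIC (0-RTT):  ~%d ms\n", quicResumed)
}
```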

As mentioned above, although HTTP/2.0 solves head-of-line blocking at the HTTP layer, its connections are still built on TCP, so blocking at the transport layer cannot be avoided.

UDP itself is connectionless, and the streams used by QUIC are isolated from each other, so the loss of data on one stream does not block the processing of data on other streams. Using UDP therefore avoids transport-layer head-of-line blocking.

In TCP, to guarantee data reliability, a sequence number + acknowledgement number mechanism is used. Once a packet carrying a sequence number is sent to the server, the server is expected to acknowledge it within a certain period of time. If no acknowledgement arrives within that time, the client retransmits the packet until the server receives it and responds.

So how does TCP determine its retransmission timeout?

TCP generally uses an adaptive retransmission algorithm, in which the timeout is dynamically adjusted according to the round-trip time (RTT). The problem is that a retransmitted segment carries the same sequence number as the original, so when an acknowledgement arrives the sender cannot tell which transmission it belongs to, and the measured RTT can be inaccurate.

Although QUIC does not use TCP, it still guarantees reliability. The mechanism QUIC uses is the Packet Number (PN), which can be viewed as a replacement for the TCP sequence number. The packet number is also monotonically increasing, but unlike the TCP sequence number, it increases with every packet sent, even when the data has to be retransmitted, whereas a retransmitted TCP segment reuses the original sequence number.

For example, suppose a packet with PN = 10 fails to reach the server. The data it carried is retransmitted in a new packet with PN = 11. When the acknowledgement for PN = 11 later arrives, the measured RTT is exactly the time the PN = 11 packet spent in the network; there is no ambiguity about which transmission is being timed, so the calculation is accurate.
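
Here is a minimal Go sketch of this idea, with hypothetical type and function names: because every packet number maps to exactly one send time, the acknowledgement of a retransmission produces an unambiguous RTT sample.

```go
package main

import (
	"fmt"
	"time"
)

// rttEstimator records the send time of each packet number. Because QUIC
// never reuses a packet number (a retransmission gets a new, larger PN),
// an acknowledgement always maps back to exactly one send time, so the
// RTT sample is unambiguous.
type rttEstimator struct {
	sent map[uint64]time.Time
}

func newRTTEstimator() *rttEstimator {
	return &rttEstimator{sent: make(map[uint64]time.Time)}
}

func (e *rttEstimator) onSent(pn uint64) {
	e.sent[pn] = time.Now()
}

func (e *rttEstimator) onAck(pn uint64) (time.Duration, bool) {
	t, ok := e.sent[pn]
	if !ok {
		return 0, false
	}
	delete(e.sent, pn)
	return time.Since(t), true
}

func main() {
	e := newRTTEstimator()

	e.onSent(10) // original packet, assume it is lost
	e.onSent(11) // same stream data retransmitted under a new packet number

	time.Sleep(20 * time.Millisecond) // stand-in for the network round trip

	if rtt, ok := e.onAck(11); ok {
		fmt.Printf("unambiguous RTT sample from PN 11: %v\n", rtt)
	}
}
```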

QUIC guarantees the reliable delivery of packets, but if retransmissions always use new packet numbers, how are the order and integrity of the data itself guaranteed?

QUIC introduces the concept of the stream offset. Data sent on a stream is broken into chunks, and each chunk carries its byte offset within the stream. Even if the packet carrying a chunk is lost and the chunk is retransmitted in a packet with a new packet number, its stream offset stays the same. Once all chunks have reached the server, they are reassembled according to their offsets, which guarantees both the reliability and the ordering of the data.
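
Below is a small illustrative sketch (a hypothetical `chunk` type, not QUIC's actual frame format) of reassembling stream data by offset, regardless of the order in which the chunks arrive. It assumes the chunks are contiguous and non-overlapping.

```go
package main

import (
	"fmt"
	"sort"
)

// chunk is a piece of stream data identified by its byte offset within the
// stream. Even if the packet that carried it is lost and the data is resent
// in a packet with a new packet number, the offset stays the same, so the
// receiver can always put the bytes back in the right place.
type chunk struct {
	offset int
	data   string
}

// reassemble sorts contiguous, non-overlapping chunks by offset and
// concatenates them back into the original stream bytes.
func reassemble(chunks []chunk) string {
	sort.Slice(chunks, func(i, j int) bool { return chunks[i].offset < chunks[j].offset })
	out := ""
	for _, c := range chunks {
		out += c.data
	}
	return out
}

func main() {
	// Chunks arrive out of order (the middle one was retransmitted later).
	received := []chunk{
		{offset: 0, data: "GET /index"},
		{offset: 16, data: "HTTP/3\r\n"},
		{offset: 10, data: ".html "}, // retransmitted piece
	}
	fmt.Println(reassemble(received))
}
```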

As we all know, the TCP protocol is implemented in the operating system kernel. Applications can only use it; they cannot modify it. As more and more mobile devices connect to the Internet, performance has become a very important metric. Although mobile networks evolve quickly, the client side is updated very slowly; even today, many computers in many regions still run Windows XP, a system released many years ago. Server operating systems do not depend on users upgrading, but because an OS upgrade touches low-level software and runtime libraries, it is also handled conservatively and slowly.

An important feature of the QUIC protocol is that it is pluggable and can be updated and upgraded dynamically. QUIC implements congestion control at the application layer and does not require support from the operating system kernel. To switch congestion control algorithms, the server only needs to reload the new implementation; no downtime or restart is required.
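
The sketch below shows what "pluggable at the application layer" can look like; the `CongestionController` interface and the toy `fixedWindow` type are made up for illustration and are not quic-go's API.

```go
package main

import "fmt"

// CongestionController is a hypothetical application-layer interface:
// because QUIC runs in user space, a server can swap one implementation
// for another by reloading it, with no kernel change or restart.
type CongestionController interface {
	Name() string
	OnAck(bytesAcked int)
	OnLoss()
	CongestionWindow() int
}

// fixedWindow is a toy controller used only to show the plug-in shape.
type fixedWindow struct{ cwnd int }

func (f *fixedWindow) Name() string          { return "fixed" }
func (f *fixedWindow) OnAck(n int)           { f.cwnd += n }
func (f *fixedWindow) OnLoss()               { f.cwnd /= 2 }
func (f *fixedWindow) CongestionWindow() int { return f.cwnd }

type sender struct{ cc CongestionController }

// SwitchController swaps the algorithm at runtime.
func (s *sender) SwitchController(cc CongestionController) { s.cc = cc }

func main() {
	s := &sender{cc: &fixedWindow{cwnd: 14600}}
	s.cc.OnAck(1460)
	fmt.Println(s.cc.Name(), s.cc.CongestionWindow())

	// Hot-swap to another implementation without stopping the sender.
	s.SwitchController(&fixedWindow{cwnd: 29200})
	fmt.Println(s.cc.Name(), s.cc.CongestionWindow())
}
```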

We know that TCP flow control is implemented through sliding windows. If you are not familiar with sliding windows, you can read this article I wrote.

TCP Basics

The relevant sliding-window concepts are covered in the later part of that article.

QUIC also implements flow control, and like HTTP/2 it uses window updates (window_update) to tell the other end how many more bytes it is willing to accept.
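
Here is a tiny sketch of that bookkeeping, with hypothetical names: the receiver tracks how much of the advertised window has been consumed and raises the limit by sending window updates.

```go
package main

import "fmt"

// flowWindow tracks how many more bytes the peer is allowed to send.
// The receiver grows the limit by sending window-update style messages.
type flowWindow struct {
	limit    int // highest number of bytes the sender may transmit in total
	consumed int // bytes already received
}

// canReceive reports whether n more bytes fit in the advertised window.
func (w *flowWindow) canReceive(n int) bool {
	return w.consumed+n <= w.limit
}

// onData accounts for received bytes; onWindowUpdate raises the limit.
func (w *flowWindow) onData(n int)         { w.consumed += n }
func (w *flowWindow) onWindowUpdate(n int) { w.limit += n }

func main() {
	w := &flowWindow{limit: 65535}
	fmt.Println("can receive 64 KiB:", w.canReceive(65535))
	w.onData(65535)
	fmt.Println("can receive more before update:", w.canReceive(1))
	w.onWindowUpdate(65535) // receiver advertises more credit
	fmt.Println("can receive after update:", w.canReceive(1))
}
```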

The TCP header is neither encrypted nor authenticated, so it can be tampered with in transit. In contrast, QUIC packet headers are authenticated and the payloads are encrypted, so any modification of a QUIC packet can be detected by the receiver in time, which guarantees security.

In general, QUIC has the following advantages over HTTP/2.0:

  • Using UDP, it avoids the TCP three-way handshake and also shortens the time needed to establish a TLS connection.
  • It solves the head-of-line blocking problem.
  • It is dynamically pluggable, implementing congestion control at the application layer, so the algorithm can be switched at any time.
  • Packet headers and payloads are authenticated and encrypted respectively, ensuring security.
  • Connections can be migrated smoothly.

Smooth connection migration means that when your phone or other mobile device switches between 4G and Wi-Fi, the connection is not dropped and re-established; the switch happens so smoothly that the user does not even notice it.
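
A minimal sketch of why migration works, using a made-up server type: the connection is looked up by its connection ID rather than by the source IP and port, so a change of address does not break the mapping (real servers also validate the new path).

```go
package main

import "fmt"

// A QUIC connection is identified by a connection ID rather than by the
// (IP, port) four-tuple, so when a phone moves from Wi-Fi to 4G the source
// address changes but the server can still find the same connection.
type connection struct{ id string }

type server struct {
	byConnID map[string]*connection
}

func (s *server) onPacket(connID, srcAddr string) {
	c, ok := s.byConnID[connID]
	if !ok {
		c = &connection{id: connID}
		s.byConnID[connID] = c
		fmt.Printf("new connection %s from %s\n", connID, srcAddr)
		return
	}
	fmt.Printf("connection %s continues from new address %s\n", c.id, srcAddr)
}

func main() {
	s := &server{byConnID: map[string]*connection{}}
	s.onPacket("abc123", "192.0.2.10:5000")   // over Wi-Fi
	s.onPacket("abc123", "198.51.100.7:6000") // same connection after switching to 4G
}
```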

QUIC related information

The QUIC protocol is relatively complex, and it is difficult for me to fully implement it myself.

If readers are interested, they can first take a look at what open source implementations are available.

1) Chromium: https://github.com/hanpfei/chromium-net

This is the official implementation and has many advantages: it is maintained by Google, has no hidden pitfalls, and can be updated to the latest version along with Chrome at any time. However, compiling Chromium is troublesome, since it has its own separate build toolchain. This option is not recommended for now.

2) proto-quic: https://github.com/google/proto-quic

The QUIC protocol code stripped out of Chromium. However, its GitHub homepage announces that it is no longer supported and is meant only for experiments. This option is not recommended.

3) goquic: https://github.com/devsisters/goquic

goquic is a Go wrapper around libquic, which was itself extracted from Chromium and has not been maintained for several years; it only supports up to quic-36. goquic provides a reverse proxy, but testing shows that the latest Chrome can no longer connect to it because its QUIC version is too old. This option is not recommended.

4) quic-go: https://github.com/lucas-clemente/quic-go

quic-go is a QUIC protocol stack written entirely in Go. It is under active development and is already used by Caddy. It is MIT licensed and is currently the best of these options.

For small and medium-sized teams or individual developers, the recommended approach is therefore the last one: use Caddy (https://github.com/caddyserver/caddy/wiki/QUIC) to deploy QUIC. The Caddy project was not originally created to implement QUIC but to provide an HTTPS web server that needs no manual certificate handling (Caddy renews certificates automatically); QUIC is just one of its side features (although in practice many people seem to use it precisely for QUIC).

Judging by GitHub's technology trends, more and more open source resources about QUIC are appearing. If you are interested, you can study them one by one: https://github.com/search?q=quic
