HTTP/3 has reached another milestone: Cloudflare recently announced full QUIC and HTTP/3 support across its edge network. So what changes and advantages does HTTP/3 bring, and how can Internet users interact efficiently with sites through browsers and other clients? You can use the latest Chrome Canary browser to talk to servers over the UDP-based HTTP/3 protocol, and for those who prefer command-line clients, the latest version of curl also provides HTTP/3 support. In this article, Chongchong will introduce the development of HTTP/3, how site owners can enable it, and how to use HTTP/3 through the Chrome browser and the command-line client curl.
HTTP Development History

First, let's look at how HTTP has developed over the years to better understand HTTP/3.

HTTP/1.0

The HTTP protocol dates back to 1996, when the HTTP/1.0 specification (ignoring the 0.x versions) was released. This specification defines the basic text-based HTTP format we are familiar with today. HTTP/1.0 requires that each request/response exchange between client and server create a new TCP connection, so every request pays the cost of the well-known three-way handshake (and four-way teardown on close), inevitably adding latency. For example, a typical HTTP-over-TLS exchange is illustrated as follows. Moreover, to avoid flooding the network with more packets than it can absorb, TCP applies a warm-up period called "slow start" to each new connection, letting the congestion control algorithm determine how much data can be in flight rather than sending all outstanding data as soon as the connection is established. Since every new connection must go through this slow-start process, connection setup became a bottleneck for network performance.

HTTP/1.1 keep-alive

The subsequent HTTP/1.1 version introduced "keep-alive" connections to address these problems. With keep-alive, the client can reuse an existing TCP connection instead of re-establishing one for every request, avoiding the costs of connection establishment and slow start. But this does not fundamentally solve the problem: although multiple requests can share the same connection, they must still be serialized one after another, so the client and server can perform only one request/response exchange per connection at any given time. As the web evolved, the resources (CSS, JS scripts, images, videos, etc.)
required by each website have increased, and browsers have an increasingly urgent need for concurrency when fetching and rendering pages. However, since HTTP/1.1 allows the client to perform only one HTTP request/response exchange at a time per connection, the only way to obtain concurrency at the network layer was to open multiple TCP connections in parallel, forfeiting most of the benefits of keep-alive.

HTTP/2 SPDY

More than a decade later, SPDY and then the HTTP/2 specification emerged. SPDY first introduced the concept of HTTP streams: an abstraction that multiplexes different HTTP exchanges concurrently onto the same TCP connection, letting browsers reuse TCP connections far more efficiently. HTTP/2 thus solves the inefficiency of a single serialized connection and can carry multiple requests/responses simultaneously over one TCP connection. However, if a packet is lost in transit, every request and response on the connection is stalled by the loss and must wait for retransmission, even when the lost data belongs to only a single request. This is because, although HTTP/2 isolates different HTTP exchanges onto different streams, the underlying TCP cannot distinguish between them: all TCP sees is an unmarked byte stream, and its job is to deliver that entire byte stream, in order, from one endpoint to the other. When a TCP packet carrying some of those bytes is lost on the network path, it creates a gap in the stream that TCP must fill by retransmitting the affected packets once the loss is detected. Packets successfully received after the lost one cannot be delivered to the application layer until the gap is filled, even though they were not lost and belong to completely independent HTTP requests, so they incur unnecessary delay. This problem is known as TCP head-of-line blocking.
To mitigate head-of-line blocking at the application layer, HTTP/2 introduced binary framing: the data carried by the TCP stream is divided into messages, and each message is further divided into small binary frames that can be interleaved. This way, a blocked request does not hold up other requests at the HTTP layer, as shown in the fourth case in the figure above, though the TCP-level head-of-line blocking described earlier remains.

HTTP/3 QUIC

Of course, improvements layered on top of TCP can only partially solve the problem. To solve it at the root, the underlying TCP protocol itself must be replaced. This is the purpose of the UDP-based QUIC protocol that Google explored for many years, and it is the foundation of HTTP/3. QUIC makes streams first-class citizens at the transport layer. QUIC streams share the same QUIC connection, so no additional handshakes or slow starts are required to create new streams, yet streams are delivered independently: QUIC packets are encapsulated on top of UDP datagrams, so in most cases packet loss affecting one stream does not affect the others. Building on UDP also provides greater flexibility than TCP and allows QUIC implementations to live entirely in user space, so updates to the protocol implementation no longer depend on operating system updates. With QUIC, HTTP-level streams can simply be mapped on top of QUIC streams, inheriting all the benefits of HTTP/2 without the head-of-line blocking problem. QUIC also combines the typical three-way TCP handshake with the TLS 1.3 handshake, providing encryption and authentication by default while speeding up connection establishment. Even when the initial request in an HTTP session requires a new QUIC connection, the latency before data starts flowing is low.
Use of HTTP/3

HTTP/3 and QUIC bring groundbreaking changes that can fundamentally solve many long-standing problems and defects of the HTTP standards. So how can we take advantage of these benefits right away?

Quiche Framework

To support the adoption of HTTP/3, Cloudflare developed and open-sourced an HTTP/3 and QUIC application framework written in Rust, and gave it a very appetizing name and logo, quiche, in an effort to entice people to taste what HTTP/3 has to offer as soon as possible. The source code of quiche is hosted on GitHub (github.com/cloudflare/quiche). After cloning the source code, you can compile it with cargo (note that Rust 1.38 or newer and BoringSSL are required; on Windows, NASM is also needed to build BoringSSL):
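The original build commands were lost in extraction; the following is a minimal sketch of the steps described above, assuming git, cargo, and cmake are available (the `--recursive` flag pulls in the BoringSSL submodule):

```shell
# Clone quiche together with its BoringSSL submodule
git clone --recursive https://github.com/cloudflare/quiche
cd quiche

# Build the library and the bundled example clients/servers
cargo build --examples

# Optionally run the test suite to verify the build
cargo test
```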
Quiche also provides a Docker-based experimental environment that includes http3-client, http3-server, client, and server binaries. The usage is as follows. Docker build:
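The Docker command itself did not survive extraction; a sketch of building the image from the repository root follows (the image tag `quiche-apps` is an arbitrary name chosen for illustration):

```shell
# Build a Docker image from the quiche repository's Dockerfile.
# The tag "quiche-apps" is our own choice; use any name you like.
docker build -t quiche-apps .
```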
Making HTTP/3 requests
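The request example was also lost; a sketch using quiche's bundled http3-client example follows (the exact invocation may differ between quiche versions; cloudflare-quic.com is Cloudflare's public QUIC/HTTP/3 test endpoint):

```shell
# From the quiche repository root, run the example HTTP/3 client
# against a server that already speaks HTTP/3.
cargo run --example http3-client -- https://cloudflare-quic.com/
```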
Website launch

Currently, selected Cloudflare customers can enable HTTP/3 with a simple manual setting: turn on the switch in the "Network" tab of the Cloudflare dashboard.

Client Usage

Currently, the well-known browsers Google Chrome and Firefox both offer experimental HTTP/3 support: Chrome in its Canary channel, and Firefox soon in Nightly.

Chrome browser: First, download and install the latest Canary build. Then start Chrome Canary with the following command-line parameters:
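The flags themselves were lost in extraction; a sketch follows, assuming the HTTP/3 draft version current when this article was written (draft 23) and a macOS install path (adjust the binary path for your platform, and the draft number for newer builds):

```shell
# Launch Chrome Canary with QUIC enabled and HTTP/3 draft 23 selected
"/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary" \
  --enable-quic --quic-version=h3-23
```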
Once started this way, HTTP/3 is supported, and the protocol in use can be checked via the Network tab in Chrome developer tools. Note that the protocol is reported as "http2+quic/99", which is how this Chrome build labels HTTP/3.

Using curl

The latest version of curl, 7.66, also adds experimental support for HTTP/3. You can download and compile it for trial use, as described in the previous article. To use HTTP/3, pass the newly added "--http3" flag when making a request:
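The curl invocation was lost in extraction; a minimal sketch follows, assuming a curl 7.66+ binary built with an HTTP/3 backend such as quiche (`-I` simply limits the output to response headers):

```shell
# Fetch response headers over HTTP/3; --http3 forces HTTP/3 from the start
curl --http3 -I https://cloudflare-quic.com/
```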