What has changed since HTTP/1.1 was invented?

If we look carefully at the resources that the homepages of the most popular websites need to download, a very clear trend emerges in recent years: pages keep loading more, and larger, resources.
Since HTTP/1.1 was released in 1997, we have been using HTTP/1.x for a long time, but the explosive growth of web content in recent years has made HTTP/1.1 increasingly unable to meet the needs of the modern web, exposing the performance flaws of the HTTP/1.1 protocol.
What's new in HTTP/2

HTTP/1.1 compatibility

The purpose of HTTP/2 is to improve the performance of HTTP. A very important aspect of any protocol upgrade is compatibility with the old version; otherwise it is very hard for the new protocol to gain adoption. How does HTTP/2 achieve this?

HTTP/2 does not introduce a new scheme in the URI. It still uses "http://" for the plain-text protocol and "https://" for the encrypted protocol, so the browser and server can upgrade the protocol automatically behind the scenes. Users never notice the upgrade, which makes the transition smooth. Transmission is still based on TCP. To preserve functional compatibility, the application-layer semantics are exactly the same as in HTTP/1.1: the request methods, status codes, header fields, and so on are unchanged.

Binary framing

HTTP/2 no longer uses plain-text messages like HTTP/1.1; it adopts a binary format. Both the header information and the body are binary, carried in units called frames: HEADERS frames and DATA frames. This is very computer-friendly: after receiving a message, the computer no longer needs to convert a plain-text message into binary, but can parse the binary message directly, which improves the efficiency of data transmission.

Header compression (HPACK)

The HTTP protocol is stateless, so all information must be attached to every request. Many request fields are therefore repeated, such as Cookie and User-Agent: the same content must be sent with every request, which wastes a lot of bandwidth and slows things down. HTTP/2 optimizes this by introducing a header compression mechanism.
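The binary framing described in the previous section uses a fixed 9-byte frame header: a 24-bit payload length, an 8-bit type, 8 bits of flags, and a 31-bit stream identifier (per RFC 7540). As a minimal, illustrative sketch (the frame bytes below are hand-made for the example, not captured from a real connection), the header can be parsed like this:

```python
# Frame type codes from RFC 7540; DATA carries body bytes, HEADERS carries
# the (HPACK-compressed) header block.
FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x3: "RST_STREAM"}

def parse_frame_header(data: bytes) -> dict:
    """Parse the fixed 9-byte HTTP/2 frame header.

    Layout (RFC 7540, section 4.1):
      24 bits payload length, 8 bits type, 8 bits flags,
      1 reserved bit + 31 bits stream identifier.
    """
    if len(data) < 9:
        raise ValueError("need at least 9 bytes")
    length = int.from_bytes(data[0:3], "big")
    frame_type, flags = data[3], data[4]
    # Mask off the reserved high bit to get the 31-bit stream id.
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFF_FFFF
    return {
        "length": length,
        "type": FRAME_TYPES.get(frame_type, hex(frame_type)),
        "flags": flags,
        "stream_id": stream_id,
    }

# A HEADERS frame header: 13-byte payload, type 0x1, END_HEADERS flag (0x4),
# on client-initiated stream 1.
raw = b"\x00\x00\x0d" + b"\x01" + b"\x04" + b"\x00\x00\x00\x01"
print(parse_frame_header(raw))
```

Because the stream identifier travels in every frame header, the receiver can demultiplex interleaved frames without any plain-text parsing.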
On the one hand, header fields are compressed before being sent (HPACK uses Huffman coding rather than gzip, since gzip-style header compression proved vulnerable to attacks such as CRIME); on the other hand, the client and server each maintain a header table in which fields are stored and assigned an index number. After that, a field that has already been sent is replaced by its index number, which speeds things up. This is the HPACK algorithm.

Multiplexing

Multiplexing means that multiple streams can coexist in one TCP connection. In other words, multiple requests can be in flight at once, and the other end can tell which request a frame belongs to from the stream identifier carried in the frame. This feature greatly improves HTTP transmission performance, mainly in three ways:

Reuse of the TCP connection. HTTP/2 reuses a single TCP connection. Within one connection, both the client and the server can send multiple requests or responses at the same time, and they do not have to correspond one-to-one in order, which avoids HTTP-level "head-of-line blocking".

Data streams. HTTP/2 sends multiple requests and responses in parallel, interleaved, without their interfering with each other. HTTP/2 calls all the frames of one request or response a data stream. Each stream has a unique number, and every frame carries its stream ID so the receiver can tell which stream it belongs to. The specification also stipulates that streams initiated by the client always have odd IDs and streams initiated by the server have even IDs. While a stream is in flight, either the client or the server can send a signal (an RST_STREAM frame) to cancel it; in HTTP/1.1 the only way to cancel a request is to close the TCP connection.
This means that HTTP/2 can cancel a request while keeping the TCP connection open for use by other requests.

Priority. HTTP/2 can also assign each stream a different priority, carried in the frame's priority information. For example, when the client fetches HTML/CSS and image resources, it would like the server to deliver the HTML/CSS first and the images afterwards; setting stream priorities achieves this and improves the user experience.

Server push

HTTP/1.1 does not support the server actively pushing resources to the client; the client can only obtain a resource after requesting it. For example, suppose the client fetches an HTML file from the server over HTTP/1.1, and that HTML also needs a CSS file to render the page. The client must then issue a second request for the CSS file, which costs two message round trips. In HTTP/2, when the client requests the HTML, the server can proactively push the CSS file along with it, reducing the number of message exchanges.

Improved security

For compatibility reasons, HTTP/2 retains HTTP/1's "plain text" option: it can transmit data in plain text as before and does not mandate encrypted communication. However, HTTPS is the general trend, and the major browsers only support HTTP/2 over encryption, so in real deployments HTTP/2 is effectively always encrypted.

HTTP/2 legacy issues

HTTP/2 still has head-of-line blocking. HTTP/2 solves HTTP/1's head-of-line blocking through concurrent streams. That looks perfect, but HTTP/2 still suffers from head-of-line blocking, only not at the HTTP level: it occurs at the TCP level.
When TCP-level head-of-line blocking occurs, the consequences can be worse than in HTTP/1.1, because there is only a single TCP connection, and all subsequent transmissions must wait for the earlier data to arrive. HTTP/1.1 opens multiple TCP connections, so if one of them is blocked the others can still proceed normally.
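To make the multiplexing idea above concrete, here is a toy sketch (the frames and payloads are invented for illustration): a receiver demultiplexes interleaved frames arriving on one connection by their stream ID, reassembling each request independently, and checks the rule that client-initiated streams use odd IDs:

```python
from collections import defaultdict

# Toy frames: (stream_id, payload chunk). On a real connection these arrive
# interleaved on a single TCP socket; the stream id on each frame lets the
# receiver reassemble every stream independently of the others.
frames = [
    (1, b"GET /index"),
    (3, b"GET /style"),
    (1, b".html"),
    (3, b".css"),
]

streams = defaultdict(bytearray)
for stream_id, chunk in frames:
    # HTTP/2 rule: client-initiated streams use odd ids,
    # server-initiated streams use even ids.
    assert stream_id % 2 == 1, "client-initiated streams must be odd"
    streams[stream_id].extend(chunk)

# Each stream reassembles cleanly even though its frames were interleaved.
for sid, body in sorted(streams.items()):
    print(sid, bytes(body))
```

Note that the interleaving only helps at the HTTP layer: if a TCP segment is lost, the kernel withholds all later bytes, on every stream, until the retransmission arrives, which is exactly the TCP-level head-of-line blocking described above.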