The HTTP protocol feels familiar to any programmer with development experience. Yet although we see it every day, many of us cannot answer detailed questions about HTTP without preparing in advance.
Today, let's talk about the differences between HTTP 2.0 and earlier versions.

1. HTTP Definition

HTTP (HyperText Transfer Protocol) is the protocol used to transfer hypertext from a WWW server to a local browser.

2. History of HTTP Development

3. HTTP 2.0 vs 1.x Performance

The arrival of HTTP 2.0 greatly improved web performance compared with HTTP 1.x. Akamai published an official demo that illustrates the improvement of HTTP/2 over HTTP/1.1: 379 images are requested at the same time, and HTTP/2's speed advantage is clear from the comparison of load times.

4. Differences between HTTP 2.0 and 1.1

Below we discuss the differences between HTTP 2.0 and HTTP 1.1 from several angles and explain the principles behind them.

Difference 1: Multiplexing

Multiplexing allows multiple request-response messages to be in flight simultaneously over a single HTTP/2 connection. Here is an example: the first request of the whole visit is for the index.html page, after which the browser requests the style.css and scripts.js files. With HTTP/1.x the two files are loaded sequentially; with HTTP/2 they are loaded in parallel.

We know that HTTP relies on TCP at the transport layer, so how can two requests and responses be sent simultaneously over the same connection? First, note that a TCP connection is equivalent to two pipes (one from server to client, one from client to server). Data in each pipe is transmitted as an ordered stream of bytes, one after another. For example, if the client wants to send the two words Hello and World to the server, it can only send Hello first and then World; it cannot send them at the same time. Otherwise, the server might receive something like HWeolrllod (the bytes of the two words are interleaved, though each word's own byte order is preserved).
In this way, the server would be confused. So can we send Hello and World at the same time? Of course we can: we split the data into packets and label each packet. The transmission then looks like this: ①H ②W ①e ②o ①l ②r ①l ②l ①o ②d. When the data reaches the server, the server separates the two words according to the labels.

To achieve this effect, HTTP/2 introduces a new concept: binary framing. The binary framing layer sits between the application-layer HTTP semantics and the transport layer (TCP). HTTP/2 does not modify the TCP protocol but exploits TCP's features as much as possible. In the binary framing layer, HTTP/2 divides all transmitted information into frames and encodes them in binary: header information is carried in HEADERS frames, and the request or response body is carried in DATA frames.

The key to HTTP performance optimization is not high bandwidth but low latency. A TCP connection "tunes" itself over time: it initially limits the connection's maximum speed, then increases the transmission speed as data is transmitted successfully. This tuning is called TCP slow start. Because of it, HTTP connections, which are bursty and short-lived, become very inefficient. By letting all data streams share the same connection, HTTP/2 uses TCP much more efficiently, allowing high bandwidth to truly serve HTTP performance.

To summarize, multiplexing means a single connection carries multiple resources. This reduces the connection pressure on the server, uses less memory, and increases connection throughput; and by reducing the impact of TCP slow start, it also increases transmission speed.

Difference 2: Header Compression

Why compress?
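The labeling scheme above can be sketched in a few lines of Python. This is a toy illustration of frame-based interleaving, not the real HTTP/2 frame format (stream identifiers, frame headers, priorities, and flow control are all omitted):

```python
# Toy illustration of HTTP/2-style framing: split each message into
# one-byte "frames" tagged with a stream id, interleave the frames on a
# single ordered pipe, then reassemble each stream on the other side.

def to_frames(stream_id, message):
    """Split a message into (stream_id, byte) frames."""
    return [(stream_id, ch) for ch in message]

def interleave(*frame_lists):
    """Merge several frame lists round-robin onto one 'connection'."""
    frames = []
    for group in zip(*frame_lists):  # assumes equal lengths, for simplicity
        frames.extend(group)
    return frames

def reassemble(frames):
    """Rebuild each stream's message from the tagged frames."""
    streams = {}
    for stream_id, ch in frames:
        streams.setdefault(stream_id, []).append(ch)
    return {sid: "".join(chars) for sid, chars in streams.items()}

wire = interleave(to_frames(1, "Hello"), to_frames(2, "World"))
print("".join(ch for _, ch in wire))   # HWeolrllod  (interleaved on the wire)
print(reassemble(wire))                # {1: 'Hello', 2: 'World'}
```

The raw bytes on the wire are interleaved exactly as in the ①H ②W ①e ②o … example, yet each receiver-side stream comes out intact because every frame carries its stream label.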
In HTTP/1, requests and responses consist of three parts: the status line (or request line), the headers, and the message body. The body is usually compressed with gzip, or is an already-compressed binary file (such as an image or audio), but the status line and headers are never compressed; they are transmitted directly as plain text. As web applications become more and more complex, the number of requests per page keeps growing, so more and more traffic is consumed by headers. In particular, content that rarely changes, such as User-Agent and Cookie, must be retransmitted with every request, which is a pure waste. Let's explain the principle of compression in plain terms. Header compression works only when both the browser and the server support HTTP/2.
Static dictionaries have two functions: for header key-value pairs that are completely fixed (for example, :method: GET), the whole pair can be represented by a single index; for headers whose name is common but whose value varies (for example, user-agent or cookie), the name alone can be represented by an index, and only the value needs to be transmitted.
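A static dictionary thus lets a sender replace a fully matched header pair with one index, or a matched header name with an index plus the literal value. Here is a toy encoder showing both lookup modes (a hypothetical simplification for illustration; real HPACK adds dynamic-table management and Huffman coding of literals):

```python
# Toy illustration of static-dictionary header compression.
# A full (name, value) match encodes as a single index; a name-only
# match encodes as the index plus the literal value.

STATIC_TABLE = {
    1: (":authority", None),
    2: (":method", "GET"),
    4: (":path", "/"),
    58: ("user-agent", None),
}
# Reverse lookups: full-pair matches and name-only matches.
FULL = {(n, v): i for i, (n, v) in STATIC_TABLE.items() if v is not None}
NAME = {n: i for i, (n, _) in STATIC_TABLE.items()}

def encode(name, value):
    if (name, value) in FULL:            # whole pair is in the table
        return ("indexed", FULL[(name, value)])
    if name in NAME:                     # only the name is in the table
        return ("literal", NAME[name], value)
    return ("literal-new", name, value)  # nothing matched

print(encode(":method", "GET"))          # ('indexed', 2)
print(encode("user-agent", "curl/8.0"))  # ('literal', 58, 'curl/8.0')
```

A frequently repeated pair like :method: GET collapses to a single small number, while a varying header like user-agent still saves the cost of retransmitting its name.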
The static dictionary in HTTP/2 is defined in the HPACK specification (RFC 7541, Appendix A); its first few entries look like this:

Index | Header Name | Header Value
1 | :authority |
2 | :method | GET
3 | :method | POST
4 | :path | /
5 | :path | /index.html

At the same time, both the browser and the server can add key-value pairs to a dynamic dictionary, after which each pair can likewise be represented by a single index. Note that the dynamic dictionary is context-dependent: a separate dictionary must be maintained for each HTTP/2 connection. Using indexes instead of full key-value pairs during transmission greatly reduces the amount of data sent.

Difference 3: HTTP/2 Supports Server Push

Server push is a mechanism for sending data before the client requests it. Modern web pages use many resources: HTML, style sheets, scripts, images, and so on. In HTTP/1.x, each of these resources must be explicitly requested, which can be a slow process: the browser starts by fetching the HTML, then incrementally fetches more resources as it parses and evaluates the page. Because the server must wait for the browser to make each request, the network is often idle and underutilized.

To improve latency, HTTP/2 introduces server push, which allows the server to push resources to the browser before the browser explicitly requests them. A server often knows that a page will require many additional resources, and it can start pushing those resources while responding to the browser's first request. This lets the server make full use of an otherwise idle network and improves page load times.
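The push flow can be simulated with a toy in-memory "server" (purely illustrative; real HTTP/2 push uses PUSH_PROMISE frames and is configured in the web server, and the resource names below are made up):

```python
# Toy simulation of server push: when the client asks for the HTML page,
# the server also returns the subresources it knows the page will need,
# so the client never has to issue separate requests for them.

RESOURCES = {
    "/index.html": "<html>...</html>",
    "/style.css": "body { color: black; }",
    "/scripts.js": "console.log('hi');",
}
# The server knows which subresources each page needs.
PUSH_MAP = {"/index.html": ["/style.css", "/scripts.js"]}

def handle_request(path):
    """Return the requested resource plus any pushed subresources."""
    response = {path: RESOURCES[path]}
    for pushed in PUSH_MAP.get(path, []):
        response[pushed] = RESOURCES[pushed]  # sent before being asked for
    return response

cache = handle_request("/index.html")
print(sorted(cache))   # ['/index.html', '/scripts.js', '/style.css']
```

After a single request for /index.html, the client's cache already holds the style sheet and script, instead of discovering them while parsing the HTML and fetching them one round trip later.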