HTTP pipelining is a technique for sending multiple HTTP requests over a single connection without waiting for the corresponding responses. Pipelining can dramatically improve the loading time of HTML pages, especially over high-latency links such as satellite Internet access. Over broadband connections the speed-up is less significant, because the server must support HTTP/1.1 and must reply to requests in the order the client sent them; the whole connection therefore remains first-in-first-out, and head-of-line (HOL) blocking can still occur and introduce delays. The asynchronous multiplexing in HTTP/2 and SPDY solves this problem. Because several HTTP requests can be packed into a single TCP segment, pipelining also means fewer TCP packets on the wire and a lighter network load. Pipelining requires a persistent (keep-alive) connection, and only requests such as GET and HEAD should be pipelined; non-idempotent methods such as POST must not be. Consecutive GET and HEAD requests can always be pipelined, whereas whether a series of idempotent requests such as GET, HEAD, PUT, and DELETE can be pipelined depends on whether any request in the series depends on the results of the others. Pipelining should also not be started as soon as a connection is established, because the other side (the server) may not support HTTP/1.1. Pipelining relies on support from both the client and the server: an HTTP/1.1-compliant server "supports" pipelining only in the sense that it must not fail when it receives pipelined requests, not that it is required to pipeline its responses.

What is HTTP pipelining?

Normally, HTTP requests are sent sequentially: the next request is sent only after the response to the current request has been received in full. Because of network latency and bandwidth limits, this can introduce a long delay before the server sees the next request. HTTP/1.1 allows multiple HTTP requests to be written to a socket together, without waiting for the corresponding responses; the client then waits for the responses, which arrive in the same order as the requests. (Author's note: all requests form a FIFO queue. Once a request has been sent, the client does not need to wait for its response before issuing the next one, and the server returns the responses in the same FIFO order.) Pipelining can greatly improve page load speed, especially over high-latency connections. Pipelining can also reduce the number of TCP/IP packets: a typical MSS is between 536 and 1460 bytes, so several HTTP requests can fit into a single packet. Reducing the number of packets needed to load a page benefits the network as a whole, because fewer packets mean less load on routers and links. HTTP/1.1 requires servers to support pipelining as well. This does not mean that a server must pipeline its responses, only that it must not fail when a client issues pipelined requests. This obviously has the potential to introduce a new category of evangelism bugs, since only newer browsers support pipelining.
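To make the mechanism concrete, here is a minimal sketch of pipelining at the socket level. It assumes a plain HTTP/1.1 server reachable at example.com on port 80 (a stand-in host; substitute one you control). Both GET requests are written back-to-back on one connection before anything is read; the server must return the responses in request order.

```python
# A minimal sketch of HTTP pipelining over a raw socket.
# Assumption: an HTTP/1.1 server is reachable at HOST on port 80 (no TLS).
import socket

HOST = "example.com"          # placeholder host; substitute a server you control
PATHS = ["/", "/robots.txt"]

with socket.create_connection((HOST, 80), timeout=10) as sock:
    # Write all requests back-to-back without reading any response.
    # "Connection: close" on the last request asks the server to close the
    # socket after the final response, so the read loop below can finish.
    for i, path in enumerate(PATHS):
        last = i == len(PATHS) - 1
        request = (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {HOST}\r\n"
            + ("Connection: close\r\n" if last else "")
            + "\r\n"
        )
        sock.sendall(request.encode("ascii"))

    # The responses arrive concatenated on the same socket,
    # in the same order as the requests (FIFO).
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)

raw = b"".join(chunks)
# Crude check: count the status lines that came back.
print(raw.count(b"HTTP/1.1 "), "pipelined responses received")
```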
When should we pipeline requests?

Only idempotent requests (see Note 1) should be pipelined, such as GET and HEAD; POST and PUT should not be. We should also not issue pipelined requests on a newly established connection, because we cannot yet be sure whether the origin server or proxy supports HTTP/1.1. Pipelining should therefore only be used on existing keep-alive connections.

How many requests should be pipelined?

If the connection is closed too early, pipelining many requests is not worthwhile: we spend time writing requests to the network only to have to write them again on a new connection. Moreover, if an earlier request takes a long time to complete, a long pipeline actually makes the delay the user perceives longer. The HTTP/1.1 standard offers no guidance on the ideal number of pipelined requests; it really only suggests no more than 2 keep-alive connections per server. Obviously the right number depends on the application. For the reasons above, browsers may not want a particularly long pipeline; 2 may be a good value, but that remains to be tested.

What happens if a request is cancelled?

If a request is cancelled, does that mean the entire pipeline is cancelled? Or does it only mean that the response to the cancelled request should be discarded, so the other requests in the pipeline are not forced to be resent? The answer depends on many factors, including how much of the cancelled request's response has yet to be received. The most naive approach is simply to cancel the pipeline and resend every request, which is only possible because the requests are idempotent. This naive approach may also be perfectly reasonable, because the requests in the pipeline probably belong to the same page-load group that is being cancelled.

What happens if the connection fails?

If the connection fails, or the server drops it while a pipelined response is being downloaded, the browser must be able to restart the requests that were lost. This situation is equivalent to the cancellation case discussed above.
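The recovery strategy described above can be sketched as simple bookkeeping around a FIFO queue. The sketch below is illustrative only and makes several assumptions not in the article: the placeholder host example.com, a GET-only exchange framed with Content-Length (no chunked encoding), and the helper names read_one_response and fetch_pipelined, which are invented for this example. Requests that never received a response are replayed on a fresh connection, which is safe because GET is idempotent; keeping the pipeline depth at 2, as discussed above, keeps the cost of such a replay small.

```python
# Sketch: pipeline GET requests and replay the unanswered ones on failure.
# Assumptions: HTTP/1.1 server at HOST:80, responses framed with
# Content-Length (no chunked encoding), GET requests only.
import socket
from collections import deque

HOST = "example.com"          # placeholder host
PIPELINE_DEPTH = 2            # small pipelines keep the cost of a replay low

def read_one_response(reader):
    """Read one Content-Length framed response; return its status line."""
    status = reader.readline().decode("latin-1").rstrip()
    if not status:                        # EOF: server closed the connection early
        raise ConnectionError("connection closed before the response arrived")
    length = 0
    while True:
        line = reader.readline().decode("latin-1").rstrip()
        if not line:                      # blank line marks the end of the headers
            break
        if line.lower().startswith("content-length:"):
            length = int(line.split(":", 1)[1])
    reader.read(length)                   # consume and discard the body
    return status

def fetch_pipelined(paths):
    pending = deque(paths)                # FIFO queue of not-yet-answered requests
    results, failures = [], 0
    while pending and failures < 3:       # cap retries so a dead server cannot loop forever
        batch = list(pending)[:PIPELINE_DEPTH]
        try:
            with socket.create_connection((HOST, 80), timeout=10) as sock:
                for path in batch:        # write the whole batch before reading anything
                    sock.sendall(f"GET {path} HTTP/1.1\r\nHost: {HOST}\r\n\r\n".encode())
                reader = sock.makefile("rb")
                for _ in batch:           # responses come back in request (FIFO) order
                    results.append(read_one_response(reader))
                    pending.popleft()     # a request is done only once its response is read
        except OSError:
            failures += 1                 # whatever is still in `pending` got no response;
            continue                      # replay it on a fresh connection (GET is idempotent)
    return results

print(fetch_pipelined(["/", "/robots.txt"]))
```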
Note

1. An idempotent request is one that can safely be repeated without changing the outcome on the server. HTTP defines GET, HEAD, PUT, and DELETE as idempotent methods; POST is not idempotent, which is why it should not be pipelined.
Source: Front-end Restaurant. To reprint this article, please contact the Front-end Restaurant via its ReTech Toutiao account. GitHub: https://github.com/zuopf769