Graphical explanation | A brief history of what is HTTP


This article is reprinted from the WeChat public account "Backend Technology Compass"; please contact that account for permission to reprint.

Today, let's dig into the HTTP protocol. Through this article you will learn:

  • A comparison of the HTTP protocol versions, with their advantages and disadvantages
  • The basic principles behind HTTP 2.0, such as the SPDY protocol, the binary framing layer, multiplexing, header compression, and server push
  • HTTP 3.0 and the QUIC protocol

Let's ride the wind and waves to the ocean of knowledge. It's time to set sail!

1. Comparison of HTTP protocol versions

HTTP, the Hypertext Transfer Protocol, is like air: you can't feel it, but it is everywhere. The author extracted some notes on the development of the HTTP protocol from Wikipedia. Let's take a look:

The Hypertext Transfer Protocol is an application protocol for distributed collaborative hypermedia information systems. The Hypertext Transfer Protocol is the basis for data communications on the World Wide Web, where hypertext documents include hyperlinks to other resources that users can easily access.

Tim Berners-Lee initiated the development of the Hypertext Transfer Protocol at CERN in 1989. The development of the early Hypertext Transfer Protocol Requests for Comments (RFCs) was a joint effort of the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), with the work later transferred to the IETF.

Introduction to Tim Berners-Lee, the Father of the World Wide Web

Tim Berners-Lee is a British engineer and computer scientist, best known as the inventor of the World Wide Web. He is a professor of computer science at Oxford University and a professor at MIT.

He proposed an information management system on March 12, 1989, and then realized the first successful communication between a Hypertext Transfer Protocol HTTP client and a server through the Internet in mid-November of the same year.

He is the director of the World Wide Web Consortium (W3C), which oversees the continued development of the Web, and the founder of the World Wide Web Foundation. He holds the 3Com Founders Chair at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), directs the Web Science Research Initiative (WSRI), sits on the advisory board of the MIT Center for Collective Intelligence, and is founder and president of the Open Data Institute; he currently advises the social network MeWe.

In 2004, Berners-Lee was knighted by Queen Elizabeth II for his pioneering work. In April 2009, he was elected a foreign associate of the United States National Academy of Sciences. Hailed as the "inventor of the World Wide Web", he was named one of Time magazine's 100 most important people of the 20th century and won the 2016 Turing Award.


Basic information about each HTTP version

After more than 20 years of evolution, the HTTP protocol has five major versions: 0.9, 1.0, 1.1, 2.0, and 3.0. The author drew a picture for you to see:

A. HTTP 0.9

0.9 is the original version, and its main features include:

  • Limited request methods

Only the GET method is supported; in particular, the now-common POST request is unavailable, so the amount of information the client can send to the server is very limited.

  • Request header is not supported

The version number cannot be specified in the request, and the server can only return an HTML string.

  • Respond and close

After the server responds, the TCP connection is closed immediately

B. HTTP 1.0

Version 1.0 is mainly an enhancement of version 0.9, and the effect is quite obvious. The main features and shortcomings include:

  • Rich request methods

New request methods, including POST, PUT, DELETE, and HEAD, were added, increasing the amount of information the client can send to the server.

  • Add request and response headers

The concept of request header and response header has been added. The HTTP protocol version number and other header information can be specified in the communication, making C/S interaction more flexible and convenient.

  • Enriching data transmission content

The supported content formats were expanded: pictures, audio and video resources, and binary data can all be transmitted. Compared with 0.9, which could only carry HTML, HTTP gained many more application scenarios.

  • Poor link reusability

In version 1.0, each TCP connection can carry only one request; once the data is sent, the connection is closed, and requesting another resource means establishing a new connection. To guarantee correctness and reliability, TCP requires a three-way handshake to open a connection and a four-way handshake to close it, so connection setup is expensive. On top of that, the slow-start phase of congestion control keeps the initial sending rate low, so the performance of version 1.0 is not ideal.

  • Disadvantages of stateless and connectionless

Version 1.0 is stateless and connectionless: the server does not track or record request state, and the client must establish a TCP connection for every request with no reuse. In addition, 1.0 stipulates that the next request can be sent only after the response to the previous one arrives; if the previous request blocks, every subsequent request blocks too. Packet loss, reordering, and the high cost of connection setup create multiplexing and head-of-line-blocking problems, so connectionlessness and statelessness are weak points of version 1.0.

C. HTTP 1.1

Version 1.1 was released about a year after version 1.0. It is an optimization and improvement of version 1.0. The main features of version 1.1 include:

  • Adding long connections

A Connection header field was added; setting it to keep-alive keeps the connection open. That is, a TCP connection is no longer closed by default and can be reused by multiple requests, which is one of the most important optimizations in 1.1. However, the server still sends responses strictly one after another: if one response is particularly slow, the requests behind it queue up, so head-of-line blocking remains.

  • Pipelining

Based on the long connection, pipelining can continue to send subsequent requests without waiting for the response to the first request, but the order of responses is still returned in the order of requests. That is, in the same TCP connection, the client can send multiple requests at the same time, further improving the transmission efficiency of the HTTP protocol.
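As a toy illustration of why pipelining still suffers head-of-line blocking (my own sketch, not from the article): because responses must come back in request order, response i completes only after all responses before it, so one slow request delays everything behind it.

```python
from itertools import accumulate

def completion_times(processing_times):
    """Pipelined requests go out immediately, but the server answers in
    request order, so response i finishes only after responses 0..i-1.
    Completion time of response i is thus the prefix sum of processing times."""
    return list(accumulate(processing_times))
```

With processing times [5, 1, 1], the two fast responses complete at t=6 and t=7 instead of t=1: the slow first request blocks the line.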

  • More request methods

Added PUT, PATCH, OPTIONS, DELETE and other request methods.

  • Host Field

The Host field is used to specify the domain name of the server, so that multiple requests can be sent to different websites on the same server, improving the reuse of the machine, which is also an important optimization.
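A minimal sketch of what such a request looks like on the wire (the helper name is my own): the Host line is what lets one server IP distinguish between the virtual hosts it serves.

```python
def build_request(method: str, path: str, host: str) -> str:
    """Build a minimal HTTP/1.1 request. The Host header is mandatory in
    1.1 and tells a shared server which virtual host the request is for."""
    return (
        f"{method} {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: keep-alive\r\n"
        "\r\n"
    )
```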

D. HTTP 2.0

Version 2.0 is a milestone version. Compared with version 1.x, it has many optimizations to adapt to the current network scenarios. Some important features include:

  • Binary format

1.x is a text protocol, while 2.0 is effectively a binary protocol whose basic unit is the binary frame. All transmitted information is split into messages and frames and encoded in binary; each frame carries data plus identifying metadata, making network transmission efficient and flexible.

  • Multiplexing

This is a very important improvement. In 1.x, establishing multiple connections carried cost and efficiency problems. In version 2.0, multiple requests share one connection through multiplexing: many requests travel concurrently over a single TCP connection, distinguished by the stream identifiers carried in the binary frames.

  • Header Compression

Version 2.0 uses the HPACK algorithm to compress header data, thereby reducing the size of the request and improving efficiency. This is very easy to understand. Previously, the same header had to be sent every time, which seemed redundant. Version 2.0 incrementally updates the header information, effectively reducing the transmission of header data.

  • Server Push

This feature is quite interesting. In previous 1.x versions, the server executed passively after receiving the request. In version 2.0, the server is allowed to actively send resources to the client, which can accelerate the client.

2. Detailed explanation of HTTP 2.0

We have compared the evolution and optimization processes of several versions. Next, we will take a deep look at some of the features of version 2.0 and its basic implementation principles.

In comparison, version 2.0 is not an optimization of version 1.1 but an innovation, because 2.0 carries more performance target tasks. Although 1.1 adds long connections and pipelining, it does not fundamentally achieve true high performance.

The design goal of 2.0 is to provide users with a faster, simpler, and safer experience while being compatible with 1.x semantics and operations, and to efficiently utilize the current network bandwidth. To this end, 2.0 has made many adjustments, mainly including: binary framing, multiplexing, header compression, etc.

Akamai has a demo comparing the loading behavior of HTTP 2.0 and HTTP 1.1 (in my run, loading 379 small image tiles took 0.99 s versus 5.80 s):

https://http2.akamai.com/demo

2.1 SPDY Protocol

To talk about the 2.0 version standard and new features, we must mention Google's SPDY protocol. Take a look at Baidu Encyclopedia:

SPDY is a TCP-based session layer protocol developed by Google to minimize network latency, increase network speed, and optimize the user's network experience. SPDY is not a protocol to replace HTTP, but an enhancement of the HTTP protocol.

The new protocol features include data stream multiplexing, request prioritization, and HTTP header compression. Google said that after the introduction of the SPDY protocol, page loading speeds in lab tests were 64% faster than before.

Subsequently, the SPDY protocol was supported by major browsers such as Chrome and Firefox, and deployed on some large and small websites. This efficient protocol attracted the attention of the HTTP working group, and the official Http2.0 standard was formulated on this basis.

In the following years, SPDY and Http2.0 continued to evolve and promote each other. Http2.0 allowed servers, browsers, and website developers to have a better experience with the new protocol and was quickly recognized by the public.

2.2 Binary Framing Layer

The binary framing layer redesigns the encoding mechanism without changing the request method and semantics. The figure shows the http2.0 layered structure (picture from reference 4):

The binary encoding mechanism enables communication to take place over a single TCP connection that remains active for the duration of the conversation.

The binary protocol breaks down the communication data into smaller frames. The data frames are filled in the bidirectional data flow between the client and the server, just like a two-way multi-lane highway with constant flow of traffic:

To understand the binary framing layer, you need to know four concepts:

Connection

A TCP connection between client and server; the basic data highway.

Stream

A bidirectional byte flow within an established TCP connection. A connection can carry one or more streams, and a stream can carry one or more messages.

Message

A complete sequence of frames corresponding to a logical request or response; a message belongs to a stream, and frames make up a message.

Frame

The smallest unit of communication. Each frame carries a frame header, which identifies the stream the frame belongs to, plus a payload.

The four are one-to-many inclusion relationships. The author drew a picture:

Let's take a look at the structure of the HEADERS frame: its fields include the length, type, flags, stream identifier, and payload. If you are interested, read the relevant parts of RFC 7540.

  1. https://httpwg.org/specs/rfc7540.html

In short, version 2.0 breaks down communication data into binary coded frames for exchange. Each frame corresponds to a specific message in a specific data stream. All frames and streams are multiplexed within a TCP connection. The binary framing protocol is an important foundation for other functions and performance optimizations of 2.0.
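As a concrete sketch of that layout, the helper below (my own function name) parses the fixed 9-byte HTTP/2 frame header defined in RFC 7540 section 4.1: a 24-bit payload length, 8-bit type, 8-bit flags, one reserved bit, and a 31-bit stream identifier.

```python
import struct

def parse_frame_header(data: bytes) -> dict:
    """Parse the 9-byte HTTP/2 frame header (RFC 7540 section 4.1)."""
    if len(data) < 9:
        raise ValueError("need at least 9 bytes")
    # !BHBBI: 1+2 bytes of length, 1 byte type, 1 byte flags, 4 bytes stream id
    hi, lo, ftype, flags, stream = struct.unpack("!BHBBI", data[:9])
    return {
        "length": (hi << 16) | lo,         # payload length, 24 bits
        "type": ftype,                     # e.g. 0x0 DATA, 0x1 HEADERS
        "flags": flags,
        "stream_id": stream & 0x7FFFFFFF,  # clear the reserved high bit
    }
```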

2.3 Multiplexing

There is a head-of-line blocking problem in version 1.1. Therefore, if the client wants to initiate multiple parallel requests to improve performance, it must use multiple TCP connections, which will incur greater delays and link establishment and teardown costs, and cannot effectively utilize TCP links.

The use of a new binary framing protocol in version 2.0 breaks through many limitations of 1.x and fundamentally achieves true request and response multiplexing.

The client and server break down the interactive data into independent frames, transmit them interleavedly without affecting each other, and finally reassemble them at the other end based on the stream identifier in the frame header, thereby achieving multiplexing of the TCP link.
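That reassembly can be modeled with a toy sketch (my own names, ignoring flow control and frame types): frames from different streams arrive interleaved on one connection and are regrouped by the stream identifier carried in each frame header.

```python
from collections import defaultdict

def reassemble(frames):
    """frames: iterable of (stream_id, payload) in arrival order.
    Frames of one stream keep their relative order; streams interleave freely."""
    streams = defaultdict(bytearray)
    for stream_id, payload in frames:
        streams[stream_id] += payload
    return {sid: bytes(buf) for sid, buf in streams.items()}
```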

The figure shows the frame-based message communication process of version 2.0 (picture from reference 4):


2.4 Header Compression

A.Header redundant transmission

We all know that an HTTP request has a header part: every request carries one, and within a single connection most requests carry largely identical headers. Transmitting the same bytes every time is simply wasted bandwidth.

On the modern web, a page triggers on average more than 100 HTTP requests, and each request header averages 300-500 bytes, tens of kilobytes in total. This can add noticeable delay, especially on congested Wi-Fi or cellular networks, where you watch the phone spin in circles even though the request headers barely change between requests. Repeatedly transmitting the same data over an already crowded link is not an efficient approach.

The congestion control designed based on TCP has the AIMD characteristic. If packet loss occurs, the transmission rate will drop significantly. In a crowded network environment, a large packet header means that the low-speed transmission caused by congestion control will be aggravated.

B. HTTP compression and the CRIME attack

Before version 2.0's HPACK algorithm, HTTP compression used gzip; the later SPDY protocol added special handling for headers but still relied on the DEFLATE algorithm.

In subsequent practice it was found that DEFLATE-based header compression (in both HTTPS and SPDY) is vulnerable to the CRIME attack. Because DEFLATE uses backward string matching and dynamic Huffman coding, an attacker who controls part of the request can inject guesses and observe how much the compressed size changes: if it shrinks, the injected text must repeat something already present in the request, such as a cookie.

The process is a bit like clearing lines in Tetris: after enough attempts, the secret content can be fully recovered. This risk motivated the design of safer compression schemes.

C.HPACK algorithm

In version 2.0, the HPACK algorithm maintains a header table on both the client and the server to store previously sent key-value pairs. Common pairs that hardly change during a connection only need to be sent once.

In extreme cases, if the request header does not change each time, the header is not included in the transmission, that is, the header overhead is zero bytes. If the header key-value pair changes, only the changed data needs to be sent, and the newly added or modified header frame will be appended to the header table. The header table always exists during the life of the connection and is updated and maintained by the client and server.

Simply put, the client and server jointly maintain a key-value table; entries are transmitted only when they change. This amounts to one initial full transmission followed by incremental updates, an idea that is also very common in everyday development.
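The "full once, then deltas" idea can be sketched like this (a toy model with my own names, not the real HPACK wire format, which additionally uses static/dynamic index tables and Huffman coding):

```python
def encode_delta(table: dict, headers: dict) -> dict:
    """Return only the header fields that differ from the shared table,
    then update the table; both sides keep their tables in sync."""
    delta = {k: v for k, v in headers.items() if table.get(k) != v}
    table.update(headers)
    return delta

def decode_delta(table: dict, delta: dict) -> dict:
    """Merge a received delta into the shared table to rebuild full headers."""
    table.update(delta)
    return dict(table)
```

On the first request the delta is the full header set; afterwards only changed fields travel.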

The figure shows the update process of the header table (picture from reference 4):

Related documents of hpack algorithm:

https://tools.ietf.org/html/draft-ietf-httpbis-header-compression-12

2.5 Server Push

Server push is a powerful new feature added in version 2.0. Different from the general question-and-answer C/S interaction, in push-based interaction, the server can send multiple responses to a client's request. In addition to the response to the initial request, it also pushes additional resources to the client without the client's explicit request.

For example:

Imagine that you go to a restaurant to eat. A fast food restaurant with good service will provide you with napkins, chopsticks, spoons and even seasonings after you order a bowl of beef noodles. Such proactive service saves guests’ time and improves the dining experience.

This method of actively pushing additional resources is very effective in actual C/S interactions, because almost every network application contains multiple resources, and the client needs to obtain them all one by one. At this time, if the server pushes these resources in advance, it can effectively reduce the additional delay time, because the server can know what resources the client will request next.
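A toy sketch of the idea (hypothetical names and data; real HTTP/2 push uses PUSH_PROMISE frames): the server keeps a map from a page to the assets it knows the page will need, and emits them alongside the response.

```python
# Hypothetical push map: assets the server knows each page will request next.
PUSH_MAP = {"/index.html": ["/style.css", "/app.js"]}

def respond(path: str):
    """Answer the request and proactively push the associated resources."""
    responses = [("response", path)]
    responses += [("push", asset) for asset in PUSH_MAP.get(path, [])]
    return responses
```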

The following figure shows the simple process of server push (picture from reference 4):

3. HTTP 2.0 and HTTP 3.0

Technology never stops.

We all know that services on the Internet keep evolving, and the same is true for an important protocol like HTTP: each new version grows out of the lessons of the old ones.

3.1 The love-hate relationship between HTTP2.0 and TCP

HTTP2.0 was launched in 2015 and is still relatively young. Its important optimizations such as binary framing protocol, multiplexing, header compression, and server push have truly taken the HTTP protocol to a new level.

Important companies like Google are not satisfied with this and want to continue to improve the performance of HTTP to get the ultimate experience with the least time and resources.

So we must ask, although HTTP2.0 has good performance, what are the shortcomings?

  • It takes a long time to establish a connection (essentially a TCP problem)
  • Head-of-line blocking problem
  • Poor performance in the mobile Internet sector (weak network environment)
  • ......

Students who are familiar with the HTTP2.0 protocol should know that these shortcomings are basically caused by the TCP protocol. Water can carry a boat but can also capsize it. In fact, TCP is also innocent!

In our eyes, TCP is a connection-oriented, reliable transport layer protocol. Currently, almost all important protocols and applications are implemented based on TCP.

The network environment is changing very quickly, but the TCP protocol is relatively slow. It is this contradiction that prompted Google to make a seemingly unexpected decision - to develop a new generation of HTTP protocol based on UDP.

3.2 Why Google chose UDP

As mentioned above, Google's choice of UDP seems unexpected, but it actually makes sense if you think about it carefully.

Let's simply look at the shortcomings of the TCP protocol and some of the advantages of UDP:

  • There are many devices and protocols developed based on TCP, making compatibility difficult
  • The TCP protocol stack is an important part of Linux, and the cost of modification and upgrade is very high.
  • UDP itself is connectionless, with no link establishment and teardown costs.
  • UDP packets have no head-of-line blocking problem
  • UDP transformation cost is low

From the above comparison, we can see that it is not easy for Google to transform and upgrade TCP. Although UDP does not have the problems caused by TCP in order to ensure reliable connections, UDP itself is unreliable and cannot be used directly.

To sum up, it is natural for Google to decide to transform a new protocol based on UDP that has the advantages of the TCP protocol. This new protocol is the QUIC protocol.

3.3 QUIC Protocol and HTTP3.0

QUIC is actually the abbreviation of Quick UDP Internet Connections, which literally means fast UDP Internet connection.

Let's take a look at some of Wikipedia's introduction to the QUIC protocol:

The QUIC protocol was originally designed by Jim Roskind at Google, implemented and deployed in 2012, and was publicly announced and described to the IETF in 2013 as experiments expanded.

QUIC improves the performance of connection-oriented web applications that are currently using TCP. It does this by establishing multiple, multiplexed connections using the User Datagram Protocol (UDP) between two endpoints.

Secondary goals of QUIC include reducing connection and transmission latency, performing bandwidth estimation in each direction to avoid congestion. It also moves the congestion control algorithm to user space instead of kernel space, and extends it with forward error correction (FEC) to further improve performance in the presence of errors.

HTTP3.0 is also known as HTTP Over QUIC. It abandons the TCP protocol and instead uses the QUIC protocol based on the UDP protocol.

4. Detailed explanation of QUIC protocol

Choose the good and follow it, and change the bad.

Since HTTP3.0 has chosen the QUIC protocol, it means that HTTP3.0 basically inherits the powerful functions of HTTP2.0, and further solves some problems existing in HTTP2.0, while inevitably introducing new problems.

The QUIC protocol must reimplement, over UDP, the important functions that HTTP 2.0 built on top of TCP, while solving the legacy problems. Let's look at how QUIC does this.

4.1 Head-of-line blocking problem

Head-of-line blocking (abbreviated as HOL blocking) is a performance-limiting phenomenon in computer networks. In layman's terms, one data packet affects a bunch of data packets, and no one can leave without it.

The head-of-line blocking problem may exist at the HTTP layer and the TCP layer. In HTTP1.x, this problem exists at both levels.

The multiplexing mechanism of the HTTP2.0 protocol solves the head-of-line blocking problem at the HTTP layer, but the head-of-line blocking problem still exists at the TCP layer.

After the TCP protocol receives a data packet, this part of the data may arrive out of order, but TCP must collect, sort and integrate all the data before using it in the upper layer. If one of the packets is lost, it must wait for retransmission, resulting in a lost packet blocking the use of data in the entire connection.

The QUIC protocol is implemented based on the UDP protocol. There can be multiple streams on a link. The streams do not affect each other. When a stream loses packets, the impact is very small, thus solving the head-of-line blocking problem.

4.2 0-RTT connection establishment

A common metric for connection establishment is the RTT (Round-Trip Time): the time a packet takes to travel to the peer and back.

RTT consists of three parts: round-trip propagation delay, queuing delay within network devices, and application data processing delay.

Generally speaking, the HTTPS protocol needs to establish a complete link including TCP handshake and TLS handshake, which requires at least 2-3 RTTs in total. The ordinary HTTP protocol also requires at least 1 RTT to complete the handshake.

However, the QUIC protocol can carry valid application data in its very first packet, achieving 0-RTT, although only under certain conditions.

Simply put, HTTP2.0 based on TCP and TLS protocols needs some time to complete handshake and encryption negotiation before actually sending data packets, and only after completion can business data be actually transmitted.

However, QUIC can send business data in the first data packet, which has a great advantage in connection delay and can save hundreds of milliseconds.

QUIC's 0RTT also requires conditions. It is impossible for a client and server to interact for the first time with 0RTT, after all, the two parties are completely strangers.

Therefore, the QUIC protocol can be divided into two cases for discussion: the first connection and the non-first connection.

4.3 First connection and non-first connection

Clients and servers using the QUIC protocol must use 1RTT for key exchange, and the exchange algorithm used is the DH (Diffie-Hellman) algorithm.

The DH algorithm opens up a new idea for key exchange. The RSA algorithm mentioned in the previous article is also implemented based on this idea. However, the key exchange of the DH algorithm and RSA is not exactly the same. Interested readers can look at the mathematical principles of the DH algorithm.


4.3.1 First connection

In short, the key negotiation and data transmission process between the client and the server during the first connection involves the basic process of the DH algorithm:
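A toy Diffie-Hellman exchange (my own sketch, with deliberately tiny public parameters; real deployments use 2048-bit-plus groups or elliptic curves) shows why one round trip is enough for both sides to agree on a key without ever sending their secrets:

```python
import secrets

P = 0xFFFFFFFB  # public prime modulus (toy-sized for illustration)
G = 5           # public generator

def dh_keypair():
    private = secrets.randbelow(P - 2) + 1  # secret exponent, kept local
    public = pow(G, private, P)             # value sent over the wire
    return private, public

def dh_shared_key(own_private: int, peer_public: int) -> int:
    # Both sides arrive at g^(ab) mod p without transmitting a or b.
    return pow(peer_public, own_private, P)
```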

4.3.2 Non-first connection

As mentioned earlier, when the client and server connect for the first time, the server passes a config package, which contains the server public key and two random numbers. The client will store the config and use it directly when connecting later, thus skipping the 1RTT and achieving 0RTT business data interaction.

The client saves the config for a time limit, and the key exchange at the first connection is still required after the config expires.

4.4 Forward security issues

Forward security is a professional term in the field of cryptography. See the explanation on Baidu:

Forward security or forward secrecy is a security property of a communication protocol in cryptography, which means that the leakage of a long-term master key will not lead to the leakage of past session keys.

Forward security protects past communications from the threat of passwords or keys being exposed in the future. If a system has forward security, it can ensure the security of historical communications when the master key is leaked, even if the system is actively attacked.

In layman's terms, forward security means that even if the key is leaked, the previously encrypted data will not be leaked. It only affects the current data and has no effect on previous data.

As mentioned earlier, the QUIC protocol negotiates its encryption keys via DH on the first connection. Since the config (containing the server's public value) is long-lived and stored by the client, if the server's private key b leaks during this period, anyone who recorded a client's public value A = g^a mod p can compute the key K = A^b mod p, that is, g^(ab) mod p.

If this key were used for all encryption and decryption, K could decrypt every historical message. Therefore a fresh key is generated afterwards for the actual data and destroyed when the interaction completes, which achieves forward security.

4.5 Forward Error Correction

Forward error correction is a term in the field of communications. Here is the explanation from Encyclopedia:

Forward error correction (FEC) is a method of increasing the reliability of data communication. In a one-way channel, once an error is discovered, the receiver has no way to request a retransmission.

FEC is a method of transmitting redundant information with data, allowing the receiver to reconstruct the data when errors occur during transmission.

That description boils down to adding redundancy for error recovery. Let's see how the QUIC protocol implements it:

Each time QUIC sends a group of data, it performs an XOR operation on the data and sends the result as an FEC packet. After receiving the data, the receiver can perform verification and error correction based on the data packet and FEC packet.
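A minimal sketch of that XOR scheme (my own function names): with one parity packet per group of equal-length packets, any single lost packet can be rebuilt by XOR-ing the survivors with the FEC packet.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def fec_packet(group):
    """Fold a group of equal-length packets into one redundancy packet."""
    return reduce(xor_bytes, group)

def recover(survivors, fec):
    """XOR the surviving packets with the FEC packet to rebuild the lost one."""
    return reduce(xor_bytes, survivors, fec)
```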

4.6 Connection Migration

Network switching happens almost all the time.

The TCP protocol uses a five-tuple to represent a unique connection. When we switch from a 4G environment to a WiFi environment, the IP address of the mobile phone will change. At this time, a new TCP connection must be created to continue transmitting data.

The QUIC protocol is based on UDP and abandons the concept of quintuples. It uses a 64-bit random number as the connection ID and uses the ID to represent the connection.

Based on the QUIC protocol, we will not reconnect when switching between wifi and 4G in daily life, or switching between different base stations, thereby improving the service layer experience.
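A toy model of that lookup (hypothetical structure, my own names): where TCP would treat a changed address tuple as a brand-new connection, a QUIC-style random connection ID keeps the same session alive across an address change.

```python
import secrets

connections: dict = {}

def new_connection_id() -> bytes:
    # QUIC names a connection with a random 64-bit ID rather than the
    # (source IP, source port, destination IP, destination port) tuple.
    return secrets.token_bytes(8)

def handle_packet(conn_id: bytes, src_addr, payload: bytes):
    """Route a packet to its session by connection ID, not by address."""
    conn = connections.setdefault(conn_id, {"addr": src_addr, "data": b""})
    conn["addr"] = src_addr  # Wi-Fi -> 4G changes the address, not the session
    conn["data"] += payload
    return conn
```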

5. Application and Prospects of QUIC

From the previous introduction, we can see that although the QUIC protocol is implemented based on UDP, it implements and optimizes the important functions of TCP, otherwise users will not buy it.

The core idea of the QUIC protocol is to move functions that TCP implements in the kernel, such as reliable transmission, flow control, and congestion control, into user space. Its experiments with encrypted transport also helped push TLS 1.3 forward.

However, TCP is deeply entrenched, and many network devices even apply unfriendly policies to UDP packets, intercepting them and lowering the connection success rate.

The leading company Google has made many attempts in its own products, and the domestic company Tencent has also made many attempts on the QUIC protocol.

Among them, Tencent Cloud showed great interest in the QUIC protocol, made some optimizations, and then conducted experiments on connection migration, QUIC success rate, time consumption in weak network environments, etc. in some key products, providing a lot of valuable data from the production environment.

Let’s take a look at the time consumption distribution of a group of Tencent Cloud requests under different packet loss rates in mobile Internet scenarios:

It takes time to promote any new thing. The popularity of HTTP2.0 and HTTPS protocols, which have been around for many years, is not as high as expected, and the same is true for IPv6. However, QUIC has shown strong vitality. Let us wait and see!

6. Conclusion

This article introduces the historical evolution of the HTTP protocol, the main features and advantages and disadvantages of each version, and focuses on some features of the HTTP 2.0 protocol, including: SPDY protocol, binary framing protocol, multiplexing, header compression, server push and other important functions. Due to limited space, I cannot expand on it too much.

Although HTTP 2.0 has many excellent features and was officially released in 2015, and major vendors at home and abroad now largely use it for at least some requests, it is still not universally deployed.

HTTP 3.0, meanwhile, emerged in 2018. The promotion and popularization of HTTP 2.0 and 3.0 will take time, but we firmly believe our networks can become safer, faster, and cheaper.

Looking back at the QUIC protocol: built on UDP, it moves TCP's important functions into user space, in effect a user-mode TCP that bypasses the kernel, although the actual implementation remains very complex.

The network protocol itself is very complex. This article can only give a rough explanation of the important parts from the overall perspective. If you are interested in a certain point, you can refer to the relevant code and RFC documents.
