The Internet today is thriving, and the mobile Internet even more so, but none of it would exist without the steady iteration of Internet protocols. Let us take a look at those protocols. When the Internet came into widespread use in the 1990s, most network traffic used just a handful of them: IPv4 routed packets of data, TCP turned those packets into connections, SSL (later TLS) encrypted those connections, DNS named the hosts to connect to, and HTTP was often the application protocol running over it all.
These core Internet protocols have changed remarkably little over the years. HTTP has added a few new headers and methods, TLS has seen minor revisions, TCP has adapted its congestion control, and DNS has introduced features like DNSSEC, but the protocols themselves have looked much the same for a long time (IPv6, which has received considerable attention in the network operator community, is the exception).

As a result, network operators, vendors, and policymakers who wish to understand (and sometimes control) the Internet have adopted a number of practices based on these protocols' wire "footprint" — whether to debug problems, improve quality of service, or impose policy. Now, major changes to the core Internet protocols are under way. While they are intended to be compatible with the Internet at large (otherwise they would not gain adoption), they can be disruptive to those who have taken liberties with undocumented aspects of the protocols, or who assumed the current protocols would never change.

Why the Internet needs to change

There are many factors driving these changes. First, the limits of the core Internet protocols have become apparent, especially where performance is concerned. Because of structural problems in the application and transport protocols, the network is not being used as efficiently as it could be, and end users see the result as poor performance (particularly in latency). That creates a strong incentive to evolve or replace these protocols, because experience shows that even small performance gains make a difference.

Second, evolving Internet protocols at all has become progressively harder over time, largely because of the unintended uses by networks described above. For example, HTTP proxies that try to compress responses make it harder to deploy new compression techniques, and TCP optimization in middleboxes makes it harder to deploy improvements to TCP.

Finally, the use of encryption on the Internet has increased, spurred by Edward Snowden's disclosures in 2013.
That is really a separate discussion, but it is closely related: encryption is one of the best tools we have to ensure that protocols can continue to evolve. Let's look at what has happened, what's coming next, how it affects networks, and how networks affect protocol design.

HTTP/2

HTTP/2 (based on SPDY, Google's TCP-based application-layer protocol) was the first notable change, standardized in 2015. It multiplexes multiple requests onto a single TCP connection, so requests no longer have to queue on the client or block one another. It is now widely deployed and supported by all major browsers and web servers.

From a network's perspective, HTTP/2 makes several notable changes. First, it is a binary protocol, so any device that assumes it is handling HTTP/1.1 will break. That kind of breakage was one of the main reasons for another major change: in practice, HTTP/2 requires encryption. This gives it a better chance of avoiding interference from intermediaries that assume they are seeing HTTP/1.1 — whether outright man-in-the-middle attacks or something subtler, such as stripping headers or blocking new protocol extensions.

HTTP/2 also requires TLS 1.2 or later when encryption is used, and blacklists cipher suites judged to be insecure, with the effect of allowing only ephemeral key exchange. See the TLS 1.3 section for the potential impact of that.

Finally, HTTP/2 allows requests for multiple hosts to be coalesced onto a single connection, improving performance by reducing the number of connections (and thus the number of congestion-control contexts) used for a page load. For example, a connection established for www.example.com can also carry requests for images.example.com. Future extensions to the protocol may even allow hosts to be added to a connection that were not listed in the TLS certificate it was originally established with. As a result, the assumption that the traffic on a connection is limited to the purpose for which it was initiated no longer holds.
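The coalescing rule described above can be sketched as a small decision function. This is a simplified model, not browser code: the helper name `can_coalesce` is hypothetical, and `fnmatchcase` is used as a stand-in for certificate wildcard matching (real TLS wildcards match only a single DNS label, which this sketch does not enforce).

```python
from fnmatch import fnmatchcase

def can_coalesce(new_host: str, new_ip: str,
                 conn_ips: set, cert_sans: list) -> bool:
    """Model of HTTP/2 connection coalescing: a client may reuse an
    existing connection for a different host if that host resolves to
    an IP address the connection already uses AND the connection's TLS
    certificate covers the new host name."""
    if new_ip not in conn_ips:
        return False
    # Approximate SAN matching; real wildcard certs are stricter.
    return any(fnmatchcase(new_host, san) for san in cert_sans)

# A connection opened to www.example.com:
conn_ips = {"192.0.2.10"}
cert_sans = ["www.example.com", "*.example.com"]

print(can_coalesce("images.example.com", "192.0.2.10", conn_ips, cert_sans))  # True
print(can_coalesce("cdn.other.net", "192.0.2.10", conn_ips, cert_sans))       # False
```

The point of the sketch is that reuse is decided by the client from the certificate and DNS answers, not negotiated with the network — which is exactly why on-path devices can no longer assume one connection means one origin.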
Despite these changes, it is worth noting that HTTP/2 does not appear to suffer from significant interoperability problems or interference from networks.

TLS 1.3

TLS 1.3 has just completed standardization and is already supported by some implementations. Don't be fooled by the incremental-sounding name: this is effectively a new version of TLS, with a heavily revised handshake that allows application data to flow from the start (often referred to as "0-RTT"). The new design relies on ephemeral key exchange, which rules out static keys.

This has worried some network operators and vendors — especially those who need to see what is happening inside those connections. Consider, for example, a bank that runs a data center subject to regulatory requirements for visibility. By sniffing traffic on the network and decrypting it with the static keys of its servers, it can log legitimate traffic and identify harmful traffic, whether from an outside attacker or from an employee leaking data from the inside.

TLS 1.3 does not support this technique for intercepting traffic, because the ephemeral keying that defeats it is also a form of protection against attack. This puts such operators in an awkward position, since regulations may require them both to use modern encryption protocols and to monitor their networks. There has been much debate about whether regulations really require static keys, whether other approaches could be just as effective, and whether weakening the security of the entire Internet for the sake of a relatively small number of networks is the right solution. It is still possible to decrypt TLS 1.3 traffic, but you need access to the ephemeral keys to do so, and by design they are not persistent.
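Why ephemeral key exchange defeats static-key decryption can be shown with a toy Diffie-Hellman exchange. This is a deliberately insecure sketch (a small Mersenne prime stands in for a real 2048-bit group, and TLS 1.3 actually uses elliptic-curve groups and a full key schedule); it only illustrates the property that each connection derives a fresh secret that no long-term server key can recover.

```python
import secrets

# Toy finite-field Diffie-Hellman. NOT secure parameters: a real
# deployment would use a standardized 2048-bit group or an elliptic curve.
P = 2**127 - 1  # a Mersenne prime, used here only for illustration
G = 3

def ephemeral_keypair():
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

def handshake() -> int:
    """One TLS-1.3-style exchange: both sides generate fresh ephemeral
    keys, derive the same shared secret, then discard their privates."""
    c_priv, c_pub = ephemeral_keypair()
    s_priv, s_pub = ephemeral_keypair()
    client_secret = pow(s_pub, c_priv, P)
    server_secret = pow(c_pub, s_priv, P)
    assert client_secret == server_secret  # both ends agree
    return client_secret

# Two connections to the same server yield unrelated session secrets:
# recording traffic today and obtaining a static server key later
# recovers neither session (forward secrecy).
print(handshake() != handshake())
```

Because the private values are discarded after each handshake, an operator who wants to decrypt must export the per-session secrets at handshake time — which is exactly the operational burden the debate above is about.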
At this point it does not look as though TLS 1.3 will change to accommodate these networks, though there have been rumblings about creating another protocol that would permit third-party observation for these use cases and more. Whether that gains traction remains to be seen.

QUIC

During the work on HTTP/2, it became apparent that TCP suffers from a similar inefficiency. Because TCP is an in-order transport protocol, the loss of a single packet can prevent the data buffered behind it from being delivered to the application. For a multiplexed protocol, this can make a big difference to performance.

QUIC attempts to solve the problem by effectively rebuilding TCP's semantics (along with part of HTTP/2's stream model) on top of UDP. Like HTTP/2, it grew out of a Google effort and is now being standardized in the Internet Engineering Task Force (IETF), with HTTP-over-UDP as the initial use case and the goal of becoming a standard by the end of 2018. Thanks to Google deploying it in Chrome and on its own sites, it already accounts for more than 7% of Internet traffic.

DOH

The most recent change is DOH (DNS over HTTP). A great deal of research has shown that networks often use DNS as a point at which to impose policy (whether on behalf of the network operator or of larger authorities). Encrypting DNS, as discussed above, can limit that control, but it has the disadvantage (at least from some perspectives) that the traffic remains distinguishable from everything else — and therefore blockable, for example by its port number. DOH addresses this by piggybacking DNS traffic onto an existing HTTP connection, removing any such discriminator: a network that wants to block access to a DOH resolver can only do so by blocking access to the website as well.

Network and Users

Beyond the desire to avoid ossification, these changes reflect the evolving relationship between networks and their users. Networks were long assumed to be benevolent, or at least neutral, but that assumption no longer holds, thanks to pervasive surveillance and attacks such as Firesheep.
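The head-of-line blocking problem that motivates QUIC, described in the QUIC section above, can be sketched with a small delivery simulation. This is an illustrative model, not protocol code: packet numbers and stream labels are invented, and the `deliverable` helper is hypothetical.

```python
def deliverable(received: set, stream_of: dict, in_order: bool) -> list:
    """Which chunks the application can read, given which packet numbers
    arrived. in_order=True models TCP: nothing past the first gap is
    delivered, regardless of stream. in_order=False models QUIC-style
    streams: a gap only stalls the stream the lost packet belonged to."""
    if in_order:
        n = 0
        while n in received:
            n += 1
        return [f"{stream_of[i]}#{i}" for i in range(n)]
    out = []
    for i in sorted(stream_of):
        if i in received and all(
            j in received for j in range(i) if stream_of[j] == stream_of[i]
        ):
            out.append(f"{stream_of[i]}#{i}")
    return out

# Packets 0..5 interleave streams A and B; packet 1 (stream A) is lost.
stream_of = {0: "A", 1: "A", 2: "B", 3: "B", 4: "A", 5: "B"}
received = {0, 2, 3, 4, 5}

print(deliverable(received, stream_of, in_order=True))   # ['A#0']
print(deliverable(received, stream_of, in_order=False))  # ['A#0', 'B#2', 'B#3', 'B#5']
```

With in-order delivery one lost packet stalls both streams; with per-stream ordering, stream B is unaffected by a loss on stream A — which is the performance difference QUIC is after.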
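The "no discriminator" property of DOH described above comes from how a query is carried: a standard DNS wire-format message travels inside an ordinary HTTPS request (RFC 8484 specifies base64url without padding for the GET form). A minimal sketch of building such a request, with no network I/O, might look like this; the resolver name is just an example.

```python
import base64
import struct

def dns_query(name: str, qtype: int = 1) -> bytes:
    """Minimal DNS wire-format query: ID 0 (as RFC 8484 recommends for
    cache friendliness), RD bit set, one question, QTYPE 1 = A, QCLASS 1 = IN."""
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)

def doh_get_url(resolver: str, name: str) -> str:
    """DOH GET target per RFC 8484: base64url-encoded query, padding stripped."""
    q = base64.urlsafe_b64encode(dns_query(name)).rstrip(b"=").decode()
    return f"https://{resolver}/dns-query?dns={q}"

print(doh_get_url("cloudflare-dns.com", "example.com"))
```

To an on-path observer, the resulting request is indistinguishable from any other HTTPS GET to that site, which is precisely why blocking the resolver means blocking the whole website.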
As a result, there is growing tension between the needs of Internet users as a whole and the networks that want access to some of the data flowing over them. Networks that want to impose policy on their users, such as corporate networks, are particularly affected.

In some cases, they can achieve their goals by installing software (or a CA certificate, or a browser extension) on users' machines. But this is not easy when the network does not own or control the computer: BYOD has become common, and IoT devices rarely have suitable control interfaces. As a result, much of the discussion around protocol development in the IETF touches on the sometimes competing needs of enterprises and other networks against the good of the Internet as a whole.

Get Involved

In the long run, the Internet needs to provide value to end users, avoid ossification, and let networks operate smoothly. The changes happening now need to meet all three goals, but they need more input from network operators.