I always found it a bit strange that a company like Google, which coined the term "datacenter tax" a few years ago, would run Snap and Swift on ordinary network cards. In the end, the partner turned out to be Intel. In hardware terms, the DPUs from the various vendors are all similar: ARM cores plus programmable logic and offload engines for encryption/decryption, compression, hashing, regular-expression matching, and so on. What I am really looking forward to is the software capability Google will provide on the Intel IPU; I am curious whether Google's next paper will look similar to our NetDAM. Judging from the evolution of the protocol stack, Google will surely place the entire Snap framework on the IPU.

On this point I agree with Google's approach. The TCP/IP stack is too heavy for container or virtual-machine networks. A certain DPU's TCP-over-RDMA-over-TCP approach is not a clean design from the application's point of view; it was probably dictated by one vendor's Java ecosystem or one cloud's large tenant base. Compared with Google and domestic companies such as Baidu and Toutiao that are gradually moving to the Go ecosystem, carrying applications directly over memory-mapped I/O and bypassing the container's protocol stack is the better choice. This is one of the reasons we built the NetDAM project: to deliver memory entirely in user mode. Of course, hardware-software co-design is inevitable. I have recently been writing a software version of NetDAM, similar in approach to Google Snap, which gives Golang a native user-mode, kernel-bypass packet send/receive mechanism.
Although Intel had previously launched the nff-go project to combine DPDK with Golang, it seems to have ended in failure, mainly because it could not provide native Go support, and cgo compilation and execution were troublesome. What I have been doing recently is building a vhost-user interface on top of DPDK, so that the kernel still handles protocols such as ARP and TCP along with remote management, while a memif gives Golang native user-mode packet send and receive. Raw-socket send/receive will ship first, mainly to give SRv6 users user-mode SID encoding support. VPP has a similar framework, including a memif integrated with Calico, but it is too cumbersome: the application side does not need that many network functions, and operating it is difficult. The next step is to extend NetDAM and Ruta to provide RPC services for ordinary applications and serverless.

The other topic is edge computing and the convergence of computing and networking, where it looks like we are walking the old road of IPv9. That a book like "Decimal Network Technology and Applications" could be published is worth reflecting on; it even allocates address space for cells and atoms. Why do I say we are repeating IPv9? For example, when a certain NewIP first appeared, it proposed variable-length addresses; the decimal network likewise supports 16-, 32-, 64-, 128-, 256-, 512-, and 1024-bit addresses. Then it occurred to me: isn't a 1024-bit address just 8 IPv6 SIDs? A certain device today needs to carry 12 SIDs, which is already more than IPv9. It brings to mind the last sentence of the IPv9 RFC: those who do not study history are doomed to repeat it.
The reason for bringing this up is that many of our engineers seem to want to solve problems through IP addressing. Some problems can indeed be solved cleanly and beautifully that way, but many engineers never consider the hardware processing capacity once addresses grow past a certain length; the result may look elegant in a few scenarios yet fail to meet capacity requirements when implemented. For example, when we designed NetDAM, we also considered using NDP for reliable transmission on the network side, or, earlier still, carrying CXL or PCIe 5.0 directly over Ethernet for remote access, as many people imagine. But once you account for timing constraints, addressing constraints, and further hardware constraints such as coherence-latency budgets and PCIe NAK timeouts, plus factors like ordinary host access and reuse, we ultimately chose UDP over Ethernet, which everyone can use, to provide the service in the most standardized way. So when drafting any network transmission standard, do not assume it is just a matter of defining an encoding; think through everything from the capabilities of the underlying hardware to the controllability of upper-layer applications. Too many option headers and too many processing branches inevitably increase hardware complexity, and such a standard eventually exits the stage of history because it cannot meet demand in capacity and cost. Many standards encode very cleverly, BIER for example, but because operations are difficult and troubleshooting is unintuitive, it has been pushed for many years and still has not been accepted by the market. More fundamentally, the purpose of network communication is to serve applications.
The essence of a programmable network is to make applications programmable, yet it was eventually discovered that some headers require root privileges to modify; how can such a technology be accepted by applications? Finally, let me quote an old remark from the IPv9 coverage of a few years ago: stop boasting at home all day long; look at the gap instead, for we do not have much time left to catch up. In the United States, where the Internet originated, nobody discusses IPv9. Vint Cerf, one of the founders of the Internet, said this "noise" in the development of the Internet arose because its proposers had not yet understood how the Internet works. He also noted: "RFC 1606 was long ago recognized as an April Fools' joke. IPv9 comes entirely from its author's imagination. The IETF has never recognized IPv9 as an Internet architecture, and IPv9 violates the rules of the domain name system." Why are other countries not "fooled" by technologies like IPv9? Wu Jianping believes this relates to Internet cultural literacy at home and abroad and to the decision-making process of social governance: first, popularizing Internet knowledge gives people a clear understanding of core Internet technology, so they are not easily fooled; second, many decisions should be made by professionals, but in recent years some, eager for quick success, have been easily taken in.