This article is reprinted from the WeChat public account "Kaida Neigong Xiuxian", written by Zhang Yanfei (allen). To reprint this article, please contact that public account.

Hello everyone, I am Fei Ge! Today I bring you a story told as a comic!

01

Hello everyone, I am a process, and my name is Xiao P. Like many other guys, I was created by, and am managed by, the boss: the operating system. If you want to know how I got here, please keep your voice down so the application developers can't hear us. The truth is, kernel developers think application developers are careless and fear that the code they write could damage the server, so they designed us processes to run all the user-mode code, naturally isolated from the kernel developers' own world. We spend most of our time running in user mode, while their code always runs in kernel mode. We have no direct access to hard drives, network cards, or other devices; if we need those, we have to trap into kernel mode through a system call, and before letting us in, the system call entry performs strict security checks on us.

Well, that's the background. Today I'm going to tell you how my good friends and I work together to handle network I/O.

02

We processes communicate with our users through a guy called a socket. In reality, every socket and every network packet on the machine is managed in kernel mode; all we ever hold is a socket file descriptor.

A long, long time ago, we generally handled only one TCP connection. We used a system call named recvfrom to read the data our user sent. If we were lucky, the data had already arrived and we could take it away right there in recvfrom! But we never know when the user will actually send a packet, so most of the time we weren't that lucky: if no data was ready when we read, the rules said we had to give up the CPU and block. Since we were only handling one connection back then, blocking on it while no request had arrived yet was perfectly normal. (A minimal sketch of this blocking pattern appears after this section.)

Later, the boss kept squeezing us, demanding that one process handle hundreds or even thousands of connections. Now, whenever we read a connection that had no data, we got suspended, and we couldn't stand that: many other connections were still waiting to be served. Worse, frequent blocking made my work very inefficient. First, every time I block, a lot of time is spent saving my current working state for the context switch; second, everything I had carefully warmed up in the L1/L2/L3 caches becomes useless.

So we asked the operating system boss for help and had him set the connections to non-blocking.

Me: "Brother, I just want to see whether there is any data on this connection. If there is, give it to me; if not, just say so instead of blocking me, okay?"

Operating system: "Sure!"

Now things were better: I could loop over all my sockets one by one and check each of them in the kernel. But my real problem remained: I still didn't know when the user would send data, so when nothing was ready, all I could do was keep looping and asking the kernel:

"Go check if there is any data on socket 1?" "No."
"Go check if there is any data on socket 2?" "No."
"Go check if there is any data on socket 3?" "No."
...
"Go check if there is any data on socket 1?" "No."
"Go check if there is any data on socket 2?" "No."
"Go check if there is any data on socket 3?" "Finally there is!"

What a tiring job! When I'm unlucky, I have to ask thousands of times before the data actually arrives. (A sketch of this polling stage follows the blocking one below.)
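Here is a minimal sketch of that early single-connection stage, in C. The name connfd and its setup are hypothetical illustrations, not code from the original story: it is assumed to be an already-accepted, blocking TCP socket.

```c
/* Blocking read on one TCP connection: the process sleeps inside
 * recvfrom() until the user's data arrives. "connfd" is assumed to be
 * an already-accepted, blocking TCP socket. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

ssize_t read_request(int connfd, char *buf, size_t len)
{
    /* Returns only when data is available (or on error/EOF); until
     * then the process gives up the CPU and waits. */
    ssize_t n = recvfrom(connfd, buf, len, 0, NULL, NULL);
    if (n < 0)
        perror("recvfrom");
    return n;
}
```

With a single connection this sleep is harmless; the trouble starts when thousands of connections share one process.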
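And here is a sketch of the "set it to non-blocking, then loop and ask" stage, again with hypothetical names: fds is assumed to be an array of nfds already-accepted sockets.

```c
/* Non-blocking busy polling: every recv() returns immediately, either
 * with data or with EAGAIN/EWOULDBLOCK ("No."). */
#include <errno.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/socket.h>

static void set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);   /* "don't block me, okay?" */
}

static void busy_poll(int *fds, int nfds)
{
    char buf[4096];
    for (int i = 0; i < nfds; i++)
        set_nonblocking(fds[i]);              /* the boss says: "Sure!" */

    for (;;) {                                /* ask, and ask, and ask... */
        for (int i = 0; i < nfds; i++) {
            ssize_t n = recv(fds[i], buf, sizeof(buf), 0);
            if (n > 0) {
                /* "Finally there is!" -- handle the request in buf */
            } else if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                /* "No." -- nothing ready here, move on to the next one */
            }
        }
    }
}
```

Nothing blocks anymore, but the process now burns its CPU time asking the kernel the same question over and over, which is exactly the tiring job described above.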
03

Finally!!!

Later, the operating system boss came up with several new system calls that support I/O multiplexing in kernel mode: select, poll, and epoll. And hey, I like the new guy epoll the best. I hand him all the sockets I need to watch, and he maintains them for me; it is said that internally he uses an advanced technique called a red-black tree. Honestly, he can use whatever he likes; I only care about what saves my strength.

I finally don't have to keep polling. Whenever I want to know which sockets have requests, I just enter kernel mode and check epoll's ready list. You really can't imagine how wonderful that feels, and it is exactly why I like this guy. When requests are plentiful, I can keep calling epoll_wait to fetch ready events and process them without ever blocking, right up until my time slice is exhausted and I am put back on the run queue to wait for the scheduler again. My efficiency is maximized and I can handle more and more concurrent connections; on Redis I can reach as much as 100,000 QPS. How impressive!

But when none of my connections has data, I still have to block, and I accept that: I would feel embarrassed occupying the CPU when there is no work to do. When the network card receives a packet destined for one of my sockets, my other brother, the soft interrupt, finds me through epoll's wait queue and wakes me up. Though what we call "waking up" really just means pushing me onto the run queue; to actually run, I still have to wait for the process scheduler to pick me up. Look how seamlessly I cooperate with epoll, the soft interrupt, the process scheduler, and all my other brothers! (A minimal sketch of this epoll loop follows below.)
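To make that cooperation concrete, here is a minimal sketch of the epoll event loop this section describes, assuming a hypothetical listening socket listenfd that has already been created, bound, and set listening:

```c
/* epoll event loop: register sockets once, then sleep in epoll_wait()
 * until the softirq side wakes us through epoll's wait queue. */
#include <sys/epoll.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

static void event_loop(int listenfd)
{
    int epfd = epoll_create1(0);          /* the red-black-tree keeper */
    struct epoll_event ev, events[MAX_EVENTS];
    char buf[4096];

    ev.events = EPOLLIN;
    ev.data.fd = listenfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev);  /* "watch this one for me" */

    for (;;) {
        /* Blocks only when no socket is ready -- the acceptable kind of
         * blocking; otherwise it returns the ready list immediately. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listenfd) {
                /* New connection: hand it to epoll as well. */
                int connfd = accept(listenfd, NULL, NULL);
                ev.events = EPOLLIN;
                ev.data.fd = connfd;
                epoll_ctl(epfd, EPOLL_CTL_ADD, connfd, &ev);
            } else {
                /* Ready socket: read and handle the request. */
                ssize_t r = recv(fd, buf, sizeof(buf), 0);
                if (r <= 0)
                    close(fd);            /* peer closed or error */
            }
        }
    }
}
```

One process, one loop, thousands of sockets: epoll keeps the interest list, the soft interrupt fills the ready list, and the scheduler decides when the woken process actually runs.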
Conclusion

Understanding kernel-level technologies like epoll will greatly strengthen your internal skills as a developer. Fei Ge previously explained all of this at the source-code level, and the response was very good: with only 10,000 followers, the post reached 5,000 reads.