A must-have for interviews! 15 classic questions about TCP protocol!


Preface

The TCP protocol is a must-know topic in interviews at large companies. I have compiled 15 classic TCP interview questions; I hope they help everyone land their ideal offer.

1. Talk about the TCP three-way handshake process

At first, both the client and the server are in the CLOSED state, and then the server starts listening to a port and enters the LISTEN state.

  • First handshake (SYN=1, seq=x): after sending it, the client enters the SYN_SENT state.
  • Second handshake (SYN=1, ACK=1, seq=y, ack=x+1): after sending it, the server enters the SYN_RCVD state.
  • Third handshake (ACK=1, ack=y+1): after sending it, the client enters the ESTABLISHED state. When the server receives this packet, it also enters ESTABLISHED. The handshake is complete, and data transmission can begin.
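
In application code, the whole handshake happens inside the kernel. A minimal loopback sketch (an illustrative setup, not from the article): `connect()` triggers SYN → SYN+ACK → ACK, and `accept()` hands the established connection to the application.

```python
import socket

# The three-way handshake is performed by the kernel inside
# connect()/accept(); user code never sees the SYN/ACK packets.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
server.listen(1)                       # server socket is now in LISTEN

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())   # SYN -> SYN+ACK -> ACK happen here
conn, _ = server.accept()              # connection handed to the application

# Both ends are now ESTABLISHED and can exchange data.
client.sendall(b"hello")
data = conn.recv(5)
print(data)                            # b'hello'

for s in (conn, client, server):
    s.close()
```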

2. Why is the TCP handshake three times, not two? Not four?

Why does the TCP handshake take three rounds? To make it easier to understand, take a romantic relationship as an analogy: what matters most for two people to get together is mutual love, that is, I love you, and I know that you love me too. Let's use this to walk through the three-way handshake:

Why can't the handshake be two times?

If there were only two handshakes, the girl would not know whether the boy received her "I love you too", and the relationship could not develop with confidence. In protocol terms: with two handshakes, the server cannot confirm that its own SYN+ACK was received, and a stale SYN lingering in the network could trick the server into opening a useless connection.

Why can't the handshake be four times?

Why not four? Because three are enough: after three messages, both parties know the other side can both send and receive. A fourth message adds no new information.

3. Talk about the TCP four-wave process

  1. First wave (FIN=1, seq=u): after sending it, the client enters the FIN_WAIT_1 state.
  2. Second wave (ACK=1, ack=u+1, seq=v): after sending it, the server enters the CLOSE_WAIT state. After the client receives this acknowledgment, it enters the FIN_WAIT_2 state.
  3. Third wave (FIN=1, ACK=1, seq=w, ack=u+1): after sending it, the server enters the LAST_ACK state and waits for the final ACK from the client.
  4. Fourth wave (ACK=1, seq=u+1, ack=w+1): the client receives the server's close request, sends the acknowledgment, and enters the TIME_WAIT state. After waiting a fixed time (2MSL, two Maximum Segment Lifetimes) without receiving a retransmitted FIN from the server, the client assumes the server received the ACK and closed normally, so it also closes the connection and enters CLOSED. The server, upon receiving this acknowledgment, closes the connection and enters CLOSED.

4. Why does TCP need to wave four times?

Let me give you an example!

★ Xiao Ming and Xiao Hong were chatting on the phone. As the call wound down, Xiao Hong said, "I have nothing more to say," and Xiao Ming replied, "I understand." But Xiao Ming might still have things to say, and Xiao Hong cannot end the call on her own schedule, so Xiao Ming keeps talking. Finally Xiao Ming says, "I'm done," Xiao Hong replies, "I understand," and only then is the call over.

5. Why does the TIME-WAIT state need to wait for 2MSL?

2MSL means two Maximum Segment Lifetimes, i.e. twice the longest time a segment can survive in the network.

  • 1 MSL ensures that the last ACK message from the active closing party in the four waves can eventually reach the other end
  • 1 MSL ensures that if the other end does not receive the ACK, the retransmitted FIN message can arrive.

6. Differences between TCP and UDP

  • TCP is connection-oriented (such as dialing to establish a connection before making a phone call); UDP is connectionless, that is, no connection needs to be established before sending data.
  • TCP provides reliable service: data transmitted over a TCP connection is delivered without loss or duplication and in order. UDP is best-effort delivery: it does not guarantee reliable delivery.
  • TCP is a point-to-point connection, UDP can be one-to-one, one-to-many, or many-to-many
  • TCP transmission efficiency is relatively low, while UDP transmission efficiency is high. It is suitable for communication or broadcast communication that requires high-speed transmission and real-time performance.
  • TCP is suitable for web pages, emails, etc.; UDP is suitable for video, voice broadcasting, etc.
  • TCP is byte stream oriented, UDP is message oriented

7. What are the fields in the TCP message header? Explain their functions

  • 16-bit port numbers: the source port identifies where the segment comes from on the sending host; the destination port identifies which upper-layer protocol or application it should be delivered to.
  • 32-bit sequence number: the number of a byte in the byte stream in one direction of a TCP conversation (from connection establishment to teardown).
  • 32-bit acknowledgment number: used to acknowledge the TCP segments sent by the other side; its value is the sequence number of the next byte expected from the other side (the received sequence number plus 1).
  • 4-bit header length: how many 32-bit words (4 bytes) the TCP header contains. Since 4 bits can represent at most 15, the maximum TCP header length is 60 bytes.
  • 6-bit flags: URG (the urgent pointer is valid), ACK (the acknowledgment number is valid), PSH (prompts the receiver to hand the data to the application immediately rather than buffering it), RST (asks the other side to re-establish the connection), SYN (connection-establishment flag), FIN (tells the other side this end is closing the connection).
  • 16-bit window size: TCP's flow-control mechanism. The window here is the advertised receive window: it tells the other side how many bytes the local TCP receive buffer can still hold, so the other side can pace its sending rate.
  • 16-bit checksum: filled in by the sender; the receiver recomputes the Internet checksum (a one's complement sum, not a CRC) over the segment to check whether it was damaged in transit. Note that the checksum covers the data as well as the TCP header; this is an important part of TCP's reliability guarantee.
  • 16-bit urgent pointer: a positive offset. Added to the sequence number field, it gives the sequence number of the byte following the last byte of urgent data; strictly speaking, it is the urgent data's offset relative to the current sequence number. It is TCP's mechanism for the sender to deliver urgent data to the receiver.
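
The fixed 20-byte layout above can be parsed directly from raw bytes. A hypothetical helper for illustration (`parse_tcp_header` is my own name, and options are ignored):

```python
import struct

def parse_tcp_header(raw: bytes) -> dict:
    """Parse the fixed 20-byte TCP header (no options), big-endian."""
    src, dst, seq, ack, off_flags, win, cksum, urg = struct.unpack(
        "!HHIIHHHH", raw[:20])
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "header_len": (off_flags >> 12) * 4,  # 4-bit data offset, in 32-bit words
        "flags": off_flags & 0x3F,            # URG|ACK|PSH|RST|SYN|FIN bits
        "window": win, "checksum": cksum, "urgent_ptr": urg,
    }

# Build a SYN segment header by hand (data offset 5 words, SYN bit set)
# and parse it back:
syn = struct.pack("!HHIIHHHH", 12345, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(syn))
```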

8. How does TCP ensure reliability?

  • First, TCP connections are established with a three-way handshake and torn down with a four-wave close, which makes connection setup and teardown reliable.
  • Secondly, the reliability of TCP is also reflected in its state; TCP will record which data has been sent, which data has been received, and which data has not been received, and ensure that the data packets arrive in order and that there are no errors in data transmission.
  • Thirdly, the reliability of TCP is also reflected in its controllability. It has mechanisms such as message verification, ACK response, timeout retransmission (sender), retransmission of out-of-order data (receiver), discarding duplicate data, flow control (sliding window) and congestion control.

9. TCP retransmission mechanism

Timeout retransmission

In order to achieve reliable transmission, TCP implements a retransmission mechanism. The most basic retransmission mechanism is timeout retransmission, that is, when sending a data message, a timer is set, and if no ACK confirmation message is received from the other party at a certain interval, the message will be resent.

What is the general setting for this interval? Let's first look at what RTT (Round-Trip Time) is.

RTT is the time it takes for a data packet to be sent out and returned, that is, the round trip time of the data packet. The timeout retransmission time is Retransmission Timeout, referred to as RTO.

How long is the RTO set?

  • If the RTO is too small, data that was not actually lost is very likely to be resent, adding load to the network and causing even more congestion and timeouts.
  • If the RTO is too large, retransmission comes long after the data is actually lost, and transmission efficiency suffers.

Generally, RTO is slightly larger than RTT, which is the best. Some friends may ask, is there a formula to calculate the timeout? Yes! There is a standard method to calculate the RTO formula, also known as the Jacobson/Karels algorithm. Let's take a look at the formula for calculating RTO.

1. First compute SRTT (the smoothed RTT):

   SRTT = (1 - α) × SRTT + α × RTT    // weighted moving average of RTT samples

2. Then compute RTTVAR (the round-trip time variation):

   RTTVAR = (1 - β) × RTTVAR + β × |RTT - SRTT|    // deviation of the samples from SRTT

3. Finally, the RTO:

   RTO = μ × SRTT + δ × RTTVAR = SRTT + 4 × RTTVAR

where α = 0.125, β = 0.25, μ = 1, δ = 4; these parameters were found to work best over a large body of measurements.
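
The three update rules above can be sketched in a few lines. This follows the article's order of operations (note that RFC 6298 updates RTTVAR before SRTT; `update_rto` is a hypothetical helper name):

```python
# Jacobson/Karels constants as given in the text.
ALPHA, BETA, MU, DELTA = 0.125, 0.25, 1, 4

def update_rto(srtt, rttvar, rtt_sample):
    """Feed one RTT sample; return the new (SRTT, RTTVAR, RTO)."""
    srtt = (1 - ALPHA) * srtt + ALPHA * rtt_sample               # smoothed RTT
    rttvar = (1 - BETA) * rttvar + BETA * abs(rtt_sample - srtt)  # deviation
    rto = MU * srtt + DELTA * rttvar                              # SRTT + 4*RTTVAR
    return srtt, rttvar, rto

# One sample of 180ms against a previous SRTT of 100ms:
srtt, rttvar, rto = update_rto(100.0, 0.0, 180.0)
print(srtt, rttvar, rto)  # 110.0 17.5 180.0
```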

However, timeout retransmission has the following disadvantages:

  • When a segment is lost, the packet is retransmitted after waiting for a certain timeout period, which increases the end-to-end delay.
  • When a segment is lost, during the timeout wait the following can happen: segments sent after it have already been received but, under cumulative acknowledgment, cannot be confirmed. The sender may assume they were lost too and retransmit them unnecessarily, wasting resources and time.

In addition, TCP has a strategy that the timeout interval will be doubled. Timeout retransmission requires a long wait. Therefore, a fast retransmission mechanism can also be used.

Fast Retransmit

The fast retransmit mechanism is not time-driven but data-driven. It triggers retransmission based on feedback from the receiving end.

Let's take a look at the fast retransmission process:

Fast retransmit process

The sender transmits six segments, Seq=1 through Seq=6:

  • The first segment, Seq=1, arrives, so the receiver returns ACK=2;
  • The second segment, Seq=2, also arrives normally, so the receiver returns ACK=3;
  • The third segment, Seq=3, is lost due to network problems;
  • The fourth segment, Seq=4, arrives, but Seq=3 is still missing, so the receiver again returns ACK=3;
  • The subsequent Seq=5 and Seq=6 also arrive, but the receiver keeps replying ACK=3 because Seq=3 has not arrived;
  • When the sender receives three duplicate ACK=3 acknowledgments in a row (four ACK=3 in total, but the first is a normal ACK and the last three are duplicates), it knows which segment was lost and retransmits it before the timer expires;
  • Finally Seq=3 arrives. Since Seq=4, 5, and 6 have all been received by then, the receiver returns ACK=7.
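
The duplicate-ACK pattern above falls out of cumulative acknowledgment. A toy sketch (my own helper, not a TCP implementation) that replays the walk-through:

```python
def cumulative_acks(arrived_seqs):
    """Return the ACK the receiver sends after each arriving segment.
    ACK n means 'I have everything below n; send byte/segment n next'."""
    received, acks = set(), []
    expected = 1
    for seq in arrived_seqs:
        received.add(seq)
        while expected in received:   # advance past contiguous data
            expected += 1
        acks.append(expected)
    return acks

# Seq=3 is lost, everything else arrives in order:
print(cumulative_acks([1, 2, 4, 5, 6]))  # [2, 3, 3, 3, 3]
```

The three trailing ACK=3 values are exactly the duplicates that trigger fast retransmit.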

But fast retransmit has a problem: the ACK only tells the sender the highest in-order data received. Exactly which later segments were lost is uncertain, so how many packets should be retransmitted?

Should it retransmit only Seq=3, or Seq=3 through Seq=6? The sender cannot tell which segments produced those three consecutive ACK=3s.

Retransmission with Selective Acknowledgement (SACK)

In order to solve the problem of fast retransmission: how many packets should be retransmitted? TCP provides the SACK method (Selective Acknowledgment).

With SACK, on top of fast retransmit, the receiver returns the sequence-number ranges of the segments it has recently received. The sender then knows exactly which ranges are missing and which segments to retransmit. SACK information is carried in the options field of the TCP header.

SACK mechanism

As shown in the figure above, the sender receives the same ACK=30 confirmation message three times, which triggers the fast retransmission mechanism. Through the SACK information, it is found that only the data segment 30~39 is lost, so only the TCP segment 30~39 is selected for retransmission.

D-SACK

D-SACK (Duplicate SACK) is an extension of SACK. It tells the sender which packets were received more than once, helping the sender determine whether packets were reordered, ACKs were lost, packets were duplicated, or a retransmission was spurious. This lets TCP control network flow more precisely. Let's look at the picture:

D-SACK Brief Process

10. Let’s talk about TCP’s sliding window

If TCP sent one piece of data and waited for its acknowledgment before sending the next, efficiency would be low.

It's like chatting face to face: you say one sentence, wait for my reply, then say the next. If I'm busy and can't reply right away, you have to wait until I finish my work and respond before saying your next sentence, which is obviously impractical.

To solve this problem, TCP introduces a window, which is a buffer space created by the operating system. The window size value indicates the maximum value of data that can be sent without waiting for a confirmation response.

The TCP header has a field called win, which is the 16-bit window size. It tells the other party how many bytes of data the local TCP receive buffer can accommodate, so that the other party can control the speed of sending data, thereby achieving the purpose of flow control.

In simple terms, every time the receiver receives a data packet, it tells the sender how much free space is left in its buffer when sending a confirmation message. The free space in the buffer is called the receive window size. This is win.

The TCP sliding window is divided into two types: the send window and the receive window. The sender's sliding window consists of four parts, as follows:

  • Sent and ACK received
  • Sent but no ACK received
  • Not sent but can be sent
  • Not sent and cannot be sent

  • The dotted rectangle is the sending window.
  • SND.WND: Indicates the size of the send window. The number of grids in the dotted box in the above figure is 14.
  • SND.UNA: An absolute pointer to the sequence number of the first byte sent but not yet acknowledged.
  • SND.NXT: The next send position, which points to the sequence number of the first byte that has not been sent but can be sent.

The receiver's sliding window consists of three parts, as follows:

  • Successfully received and confirmed
  • No data received but can be received
  • Data not received and data that cannot be received

  • The dotted rectangle is the receive window.
  • RCV.WND: the size of the receive window; in the figure above it spans 9 slots.
  • RCV.NXT: the next receive position, pointing to the sequence number of the first byte not yet received but allowed to be received.

11. Let’s talk about TCP flow control

After the TCP three-way handshake, the sender and receiver enter the ESTABLISHED state, and they can happily transmit data.

However, the sender cannot blast data at the receiver regardless of its capacity: if the receiver cannot keep up, it can only park unprocessed data in its buffer, and once the buffer is full, any further arrivals must be discarded, wasting network resources.

TCP provides a mechanism that allows the sender to control the amount of data sent based on the actual receiving capacity of the receiver. This is flow control.

TCP controls traffic through a sliding window. Let's take a look at the brief process of traffic control:

First, the two parties perform three-way handshake and initialize their respective window sizes, both of which are 400 bytes.

TCP Flow Control

  • If the current sender sends 200 bytes to the receiver, then the sender's SND.NXT will shift right by 200 bytes, which means that the current available window is reduced by 200 bytes.
  • After receiving the segment, the receiver puts it into its buffer queue; RCV.WND = 400 - 200 = 200 bytes, so win = 200 is returned to the sender. The receiver carries the shrunken 200-byte window in the header of the ACK.
  • The sender sends another 200 bytes, which also arrive and go into the buffer queue. This time the receiver is under heavy load and can only process 100 bytes; the other 100 stay buffered. Now RCV.WND = 400 - 200 - 100 = 100 bytes, so win = 100 is returned to the sender.
  • The sender continues to work and sends 100 bytes. At this time, the receiving window win becomes 0.
  • The sender stops sending and starts a scheduled task. It queries the receiver at regular intervals until win is greater than 0, then it continues sending.
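
The window arithmetic in the steps above is just buffer bookkeeping. A toy sketch (hypothetical `Receiver` class; the numbers match the 400-byte example in the text):

```python
class Receiver:
    def __init__(self, buf_size):
        self.buf_size = buf_size
        self.buffered = 0              # unprocessed bytes held in the buffer

    def receive(self, n_arrived, n_processed):
        """Accept n_arrived bytes, let the app consume n_processed;
        return the win value advertised in the next ACK."""
        self.buffered += n_arrived - n_processed
        return self.buf_size - self.buffered

r = Receiver(400)
print(r.receive(200, 0))    # app hasn't processed anything: win = 200
print(r.receive(200, 100))  # only 100 bytes processed:      win = 100
print(r.receive(100, 0))    # buffer is now full:            win = 0
```

When win reaches 0, the sender must stop and probe until the receiver frees buffer space.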

12. TCP Congestion Control

Congestion control acts on the network to prevent too many data packets from being injected into the network and avoid the situation where the network is overloaded. Its main goal is to maximize the bandwidth of the bottleneck link on the network. What is the difference between it and flow control? Flow control acts on the receiver, controlling the sending speed according to the actual receiving capacity of the receiver to prevent packet loss.

We can compare a network link to a water pipe. If we want to maximize the use of the network to transmit data, we need to make the water pipe reach the optimal full state as quickly as possible.

The sender maintains a variable called congestion window cwnd (congestion window) to estimate the amount of data (water) that this link (water pipe) can carry and transport over a period of time. Its size represents the degree of congestion in the network and changes dynamically. However, in order to achieve the maximum transmission efficiency, how do we know the transmission efficiency of this water pipe?

A relatively simple method is to continuously increase the amount of water transmitted until the pipe is about to burst (which corresponds to packet loss on the network). The TCP description is:

★ As long as there is no congestion in the network, the congestion window can be increased to send more data packets; but as soon as congestion appears, the congestion window should be reduced to cut the number of packets injected into the network.

In fact, there are several common congestion control algorithms:

  • Slow Start
  • Congestion Avoidance
  • Congestion occurs
  • Fast recovery

Slow start algorithm

The slow start algorithm means exactly what it sounds like: don't rush, take it slowly. After TCP establishes a connection, it does not blast out a large amount of data immediately; instead it probes the network's congestion level, growing the congestion window cwnd from small to large. As long as no loss occurs, cwnd increases by 1 (in MSS units) for each ACK received, so the window doubles every round trip and grows exponentially. When cwnd reaches the slow start threshold, TCP switches to the congestion avoidance phase.

  • The TCP connection is completed and cwnd is initialized to 1, indicating that data of a size of one MSS unit can be transmitted.
  • Every time an ACK is received, cwnd is incremented by one;
  • Every time an RTT passes, cwnd doubles;

In order to prevent network congestion caused by excessive growth of cwnd, a slow start threshold ssthresh (slow start threshold) state variable needs to be set. When cwnd reaches this threshold, it is like turning down the tap of a water pipe to reduce congestion. That is, when cwnd > ssthresh, the congestion avoidance algorithm is entered.

Congestion Avoidance Algorithm

Generally, the slow start threshold ssthresh is 65535 bytes. After cwnd reaches the slow start threshold:

  • Each time an ACK is received, cwnd = cwnd + 1/cwnd
  • When each RTT passes, cwnd = cwnd + 1

Obviously, this is a linearly increasing algorithm to avoid network congestion caused by too fast growth.
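
The two growth regimes can be sketched per RTT. A simplified trace (my own helper; real TCP caps the last doubling at ssthresh, which this sketch ignores):

```python
def cwnd_growth(rounds, ssthresh, cwnd=1):
    """Per-RTT cwnd trace (in MSS): exponential below ssthresh
    (slow start), linear at or above it (congestion avoidance)."""
    trace = [cwnd]
    for _ in range(rounds):
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
        trace.append(cwnd)
    return trace

print(cwnd_growth(6, ssthresh=8))  # [1, 2, 4, 8, 9, 10, 11]
```

The trace doubles up to the threshold, then creeps up by one MSS per round trip.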

Congestion occurs

When network congestion causes packet loss, there are two situations:

  • RTO timeout retransmission
  • Fast Retransmit

If an RTO timeout retransmission occurs, the congestion occurrence algorithm is used:

  • Slow start threshold ssthresh = cwnd / 2
  • cwnd is reset to 1
  • A new slow start phase begins

One timeout sends you back to square one: all the window growth built up over time is wiped out at a stroke. Fortunately there is a gentler response, fast retransmit: when the sender receives three consecutive duplicate ACKs, it retransmits immediately without waiting for the RTO to expire.


The slow start thresholds ssthresh and cwnd change as follows:

  • Congestion window size cwnd = cwnd/2
  • Slow start threshold ssthresh = cwnd
  • Enter the fast recovery algorithm

Fast recovery

Fast retransmit and fast recovery algorithms are usually used at the same time. The fast recovery algorithm believes that since three duplicate ACKs have been received, the network is not that bad, so there is no need to be as aggressive as RTO timeout.

As mentioned above, before entering fast recovery, cwnd and ssthresh are updated:

  • cwnd = cwnd / 2
  • ssthresh = cwnd

Then the fast recovery algorithm proper runs as follows:

  • cwnd = ssthresh + 3
  • Retransmit the lost segment indicated by the duplicate ACKs
  • If another duplicate ACK arrives, cwnd = cwnd + 1
  • If an ACK for new data arrives, set cwnd = ssthresh: new data being acknowledged means the recovery process is complete, and TCP re-enters congestion avoidance
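
The adjustment on entering fast retransmit/fast recovery, as described above, can be sketched in one function (hypothetical helper; values in MSS units):

```python
def on_triple_dup_ack(cwnd, ssthresh):
    """State change on 3 duplicate ACKs: halve cwnd, set ssthresh,
    then inflate cwnd by the 3 segments known to have left the network."""
    cwnd = cwnd // 2        # cwnd = cwnd / 2
    ssthresh = cwnd         # ssthresh = cwnd
    cwnd = ssthresh + 3     # enter fast recovery
    return cwnd, ssthresh

print(on_triple_dup_ack(cwnd=16, ssthresh=8))  # (11, 8)
```

Compare this with the timeout case, where cwnd collapses all the way back to 1.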

13. The half-connection queue and SYN Flood attacks

Before the TCP three-way handshake begins, the server moves from the CLOSED state to LISTEN and internally creates two queues: the half-connection queue (SYN queue) and the full-connection queue (accept queue).

What is the half-connection queue (SYN queue)? What is the full-connection queue (accept queue)? Recall the three-way handshake diagram:

Three-way handshake

  • During the three-way handshake, the client sends a SYN to the server. The server receives it, replies with SYN+ACK, and moves from LISTEN to SYN_RCVD; at this point the connection is pushed into the SYN queue, i.e. the half-connection queue.
  • When the client replies with the ACK and the server receives it, the handshake is complete. The connection then waits to be taken by the application; until it is taken, it sits in the accept queue, i.e. the full-connection queue.

SYN Flood is a typical DoS (Denial of Service) attack: the attacker forges non-existent source IP addresses and sends a large number of SYN messages to the server in a short time. The server replies SYN+ACK but never receives the ACK responses, leaving a large number of half-open connections. Once the half-connection queue is full, the server can no longer handle normal TCP requests.

  • The main defenses are SYN cookies and a SYN proxy firewall.
  • SYN cookie: after receiving a SYN, the server computes a cookie value from the packet's source address, port, and other fields, and uses that cookie as the sequence number of its own SYN+ACK. After replying SYN+ACK, it does not immediately allocate connection resources. When the client's ACK arrives, the server recomputes the cookie from the packet's source address and port and checks whether the acknowledgment number is correct; if so, the connection is established, otherwise the packet is dropped.

SYN proxy firewall: the server-side firewall answers each received SYN on the server's behalf and maintains the half-connection itself. Only after the sender returns the ACK does the firewall construct a SYN and send it to the server to establish the real TCP connection.

14. Nagle's Algorithm and Delayed Acknowledgment

Nagle's algorithm

If the sender frantically sends very small packets to the receiver, such as 1 byte, then dear friends, what problems do you think will arise?

In the TCP/IP protocol, no matter how much data is sent, a protocol header must always be added in front of the data. At the same time, when the other party receives the data, it also needs to send an ACK to indicate confirmation. In order to make the best use of the network bandwidth, TCP always hopes to send as much data as possible. The Nagle algorithm is to send as large a block of data as possible to avoid the network being filled with many small data blocks.

The basic definition of Nagle's algorithm is: at any time, there can be at most one unconfirmed small segment. The so-called "small segment" refers to a data block smaller than the MSS size, and the so-called "unconfirmed" means that after a data block is sent out, no ACK is received from the other party to confirm that the data has been received.

Implementation rules of Nagle's algorithm:

  • If the packet length reaches the MSS, it is allowed to be sent;
  • If the packet contains FIN, it is allowed to be sent;
  • If the TCP_NODELAY option is set, sending is allowed;
  • When the TCP_CORK option is not set, if all small data packets (packet length less than MSS) sent out are confirmed, they are allowed to be sent;
  • If none of the above conditions are met, but a timeout occurs (usually 200ms), it will be sent immediately.
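
The TCP_NODELAY option mentioned in the rules above is the standard knob for disabling Nagle's algorithm on latency-sensitive, small-write traffic. A minimal sketch:

```python
import socket

# Disable Nagle's algorithm on a socket: small writes go out
# immediately instead of being coalesced into larger segments.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
val = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(val)  # nonzero once the option is set
s.close()
```

Interactive protocols (SSH, games, RPC with small messages) commonly set this; bulk-transfer workloads usually leave Nagle on.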

Delayed acknowledgment

If the receiver has just received one packet from the sender and a second packet arrives very shortly after, should it ACK each one separately, or combine them into a single ACK?

★ After receiving a packet, if the receiver has no data to send back, it can wait a while before acknowledging (40ms by default on Linux). If data does need to go to the other end during that window, the ACK rides along with the data and no separate ACK is needed. If the timer expires with nothing to send, the ACK is sent anyway, so the other end does not conclude the packet was lost.

However, there are scenarios where the acknowledgment cannot be delayed, such as when out-of-order packets are detected, when a segment larger than one frame is received, or when the window size needs to be adjusted.

Generally, Nagle's algorithm and delayed acknowledgment should not be used together: Nagle delays sending and delayed acknowledgment delays the ACK, and combining them can cause significant latency and performance problems.

15. TCP Packet Sticking and Unpacking

TCP is byte-stream oriented: the stream is a continuous, unbounded run of data. The TCP layer does not understand the meaning of the upper-layer business data; it divides the stream into segments according to the actual state of the TCP buffers. So, from the application's point of view, one complete message may be split by TCP into several segments for transmission, or several small messages may be bundled into one large segment. This is the TCP packet sticking and unpacking problem.

TCP packet sticking and unpacking

Why does sticking and unpacking occur?

  • If the data written is smaller than the TCP send buffer, TCP may combine the data of several writes into a single segment, causing sticking;
  • If the receiving application does not read data from the receive buffer in time, sticking will occur;
  • If the data to be sent is larger than the remaining space in the TCP send buffer, unpacking will occur;
  • If the data to be sent is larger than the MSS (maximum segment size), TCP will split it before transmission, i.e. whenever segment length minus TCP header length exceeds the MSS.

Solutions:

  • The sender encapsulates every message into a fixed length;
  • Append a special delimiter at the end of each message and split on it;
  • Split the data into a header and a body, where the header has a fixed size and contains a field declaring the length of the body.
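
The header-plus-body approach is the most common. A sketch of length-prefixed framing (hypothetical `frame`/`unframe` helpers; a 4-byte big-endian length is an assumption, not a standard):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix the body with a 4-byte big-endian length header."""
    return struct.pack("!I", len(payload)) + payload

def unframe(stream: bytes):
    """Split a byte stream back into messages, however TCP chunked it."""
    msgs, off = [], 0
    while off + 4 <= len(stream):
        (length,) = struct.unpack_from("!I", stream, off)
        if off + 4 + length > len(stream):
            break  # incomplete message: wait for more bytes to arrive
        msgs.append(stream[off + 4 : off + 4 + length])
        off += 4 + length
    return msgs

# Two messages "stuck" together in one stream are cleanly recovered:
stream = frame(b"hello") + frame(b"world")
print(unframe(stream))  # [b'hello', b'world']
```

A real receiver would keep the unconsumed tail across `recv()` calls; the `break` on an incomplete message is where that buffering would hook in.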


This article is reprinted from the WeChat public account "Little Boy Picking Snails".
