"Interview Eight-part Essay" Network Volume 19


1. How many layers does the TCP/IP network model have? What is each layer for?


The TCP/IP network model, in its commonly taught five-layer form, consists of:

1. Application layer: the layer that users and applications interact with directly. Protocols such as HTTP, FTP, DNS, and SMTP work here, serving the programs running on phones, computers, and other devices.

2. Transport layer: provides end-to-end transmission for the application layer. When a device is the receiver, the transport layer is responsible for delivering each data packet to the right application. Since many applications on one device may be sending or receiving at the same time, a number is needed to tell them apart: the port. The TCP and UDP protocols live at this layer.

3. Network layer: responsible for moving data from one device to another; the most commonly used protocol here is IP. Because there are countless devices in the world, the network layer needs an address to distinguish them, and IP addresses serve that purpose.

4. Data link layer: every device's network card has a MAC address, which uniquely identifies that card on the local link. After the router computes the next-hop IP address, it uses the ARP protocol to find the MAC address belonging to that IP, and thus knows which device to hand the frame to. This layer mainly provides link-level transmission services for the network layer.

5. Physical layer: when data is ready to leave a device for the network, the packet must be converted into electrical (or optical or radio) signals so it can travel over a physical medium. This layer mainly provides raw binary transmission for the data link layer.

2. Let's introduce the HTTP protocol

HTTP, the HyperText Transfer Protocol, is implemented on top of TCP. It is a simple request-response protocol: it specifies what kinds of messages a client may send to a server and what kinds of responses it can expect back.

It is mainly responsible for communication between two endpoints: the client and the server.

Hypertext is text linked into a network: hyperlinks tie together pieces of information stored in different places. For example, an HTML page defines many links to images and videos, which the browser fetches and displays when it renders the page.

A protocol is simply an agreement known to both sides. For example, suppose Moon wants to mail a book to a reader who only accepts SF Express; Moon agrees and chooses SF Express when shipping. What the two have mutually agreed upon is the protocol.

Transfer is easy to understand: in the same example, the book has to reach the reader somehow, whether by bicycle or by plane, and that delivery process is the transfer.
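The request-response exchange described above can be sketched with Python's standard library. The tiny local server, its port, and the `hello` payload are illustrative choices, not part of the article:

```python
# A minimal HTTP request/response round trip using only the standard
# library: the client sends a request message, the server sends a response.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")           # client -> server: request message
resp = conn.getresponse()          # server -> client: response message
data = resp.read()
print(resp.status, data)           # 200 b'hello'
server.shutdown()
```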

3. What is the difference between GET and POST?

Both GET and POST ultimately travel over TCP connections, so at the transport level there is no fundamental difference.

However, because of what the HTTP specification prescribes and how browsers and servers behave, they differ in practice: GET puts its parameters in the URL, is meant to be safe and idempotent, and its responses can be cached; POST carries its parameters in the request body and is used to submit data that may change server state.
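Where the parameters travel is the most visible difference. A small sketch (the URL and parameter names are made up for illustration):

```python
# GET carries parameters in the URL's query string; POST carries them in
# the request body, leaving the URL clean.
from urllib.parse import urlencode

params = {"q": "network", "page": "2"}

# GET: parameters become part of the URL (visible, cacheable, bookmarkable)
get_url = "http://example.com/search?" + urlencode(params)

# POST: the URL stays unchanged and the parameters go into the body
post_url = "http://example.com/search"
post_body = urlencode(params).encode()

print(get_url)    # http://example.com/search?q=network&page=2
print(post_body)  # b'q=network&page=2'
```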

4. What is the function of PING?

The main function of PING is to test whether a connection can be established between two hosts. If PING fails, the connection cannot be established.

It actually sends multiple ICMP echo request messages to the destination host.

  • If there is no response, the connection cannot be established.
  • If there is a response, the round-trip time and packet loss rate of the data packet can be estimated based on the time of the echo message returned by the destination host and the number of successful responses.
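Real ping sends ICMP echo requests, which requires raw sockets (and usually root privileges). As a hedged stand-in, the sketch below times a TCP connect to estimate a round trip; the local loopback listener exists only so the example runs without any network access:

```python
# Rough round-trip timing via TCP connect (not real ICMP ping).
import socket
import time

listener = socket.socket()
listener.bind(("127.0.0.1", 0))     # loopback, any free port
listener.listen(1)
host, port = listener.getsockname()

start = time.perf_counter()
probe = socket.create_connection((host, port), timeout=1)
rtt_ms = (time.perf_counter() - start) * 1000
print(f"connect round trip: {rtt_ms:.2f} ms")
probe.close()
listener.close()
```

Repeating the probe and counting failures would give the packet-loss estimate described above.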

5. What are the common HTTP status codes?

Status codes fall into five classes by their first digit:

1. 1xx (informational): the request was received and processing continues, e.g. 101 Switching Protocols.

2. 2xx (success): e.g. 200 OK, 204 No Content, 206 Partial Content.

3. 3xx (redirection): e.g. 301 Moved Permanently, 302 Found, 304 Not Modified.

4. 4xx (client error): e.g. 400 Bad Request, 403 Forbidden, 404 Not Found.

5. 5xx (server error): e.g. 500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable.

6. What are the differences between HTTP1.1 and HTTP1.0?

1. Persistent connections

In early HTTP/1.0, each request needed its own TCP connection, complete with a three-way handshake, and requests were sent serially, adding unnecessary performance overhead.

HTTP/1.1 added persistent (long-lived) connections to reduce this loss.

2. Pipelining

HTTP/1.0 only sends requests serially, with no pipelining.

HTTP/1.1 added pipelining, which allows a client to send multiple requests over the same TCP connection without waiting for each response (though responses must still be returned in order).

3. Resumable downloads

HTTP/1.0 does not support resuming interrupted transfers.

HTTP/1.1 added the Range header to request a specific byte range of the data, opening the era of resumable downloads.

4. Host header processing

HTTP/1.0 assumed each host served only one site, so no Host header was passed.

By the HTTP/1.1 era, virtual hosting had matured and one machine could serve many sites, so the Host header was added and made mandatory.

5. Cache processing

In HTTP 1.0, the If-Modified-Since and Expires headers are mainly used as the cache judgment standard.

HTTP/1.1 introduced more cache-control mechanisms, such as the entity tag (ETag) and the optional If-Unmodified-Since, If-Match, and If-None-Match headers.

6. Error status response code

In HTTP1.1, 24 new error status response codes were added, such as 410 (Gone), which means that a resource on the server has been permanently deleted.
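Several of the HTTP/1.1 additions above are visible in a raw request message: Host names the virtual host, Range asks for part of a file (resumable download), and keep-alive reuses the TCP connection. The host name and byte range here are placeholders:

```python
# A raw HTTP/1.1 request showing Host, Range, and keep-alive headers.
request = (
    b"GET /big-file.zip HTTP/1.1\r\n"
    b"Host: example.com\r\n"         # mandatory in HTTP/1.1 (virtual hosting)
    b"Range: bytes=1024-2047\r\n"    # resume/partial download
    b"Connection: keep-alive\r\n"    # persistent connection
    b"\r\n"                          # blank line ends the header section
)
print(request.decode())
```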

7. What is the difference between HTTPS and HTTP?

1. SSL/TLS security protocol

HTTP is a hypertext transfer protocol, and information is transmitted in plain text, which poses a security risk.

HTTPS fixes HTTP's insecurity by inserting the SSL/TLS security layer between TCP and HTTP, so that messages are transmitted in encrypted form.

2. Establish a connection

It is relatively simple to establish an HTTP connection. After the TCP three-way handshake, HTTP messages can be transmitted.

After the TCP three-way handshake, HTTPS needs to go through the SSL/TLS handshake process before encrypted message transmission can begin.

3. Port number

The HTTP port number is 80.

The port number for HTTPS is 443.

4. CA certificate

The HTTPS protocol requires applying for a digital certificate from a CA (certificate authority) to ensure that the server's identity is credible.
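On the client side, the differences above boil down to wrapping the TCP socket in TLS and using port 443. This sketch sends no traffic; it only builds the TLS context that would perform the SSL/TLS handshake and verify the server's CA-signed certificate (the commented-out call shows where the wrapping would happen):

```python
# Build a TLS context with the system's trusted CA certificates loaded.
import ssl

context = ssl.create_default_context()           # loads trusted CAs
print(context.verify_mode == ssl.CERT_REQUIRED)  # server cert is checked
print(context.check_hostname)                    # hostname must match cert

HTTPS_PORT = 443   # vs. 80 for plain HTTP
# wrapped = context.wrap_socket(tcp_sock, server_hostname="example.com")
```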

8. What is the difference between HTTP2 and HTTP1.1?

1. Header compression

In HTTP/2, if you send multiple requests whose headers are the same, the protocol eliminates the repeated parts. (This is implemented by the HPACK algorithm, which maintains a header index table on both the client and the server.)
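A toy sketch of the shared-index idea: real HTTP/2 uses HPACK with a static plus dynamic table, while this dict-based version only illustrates "send a header field once in full, then send only its index":

```python
# Toy header index table: repeated header fields shrink to small integers.
index_table = {}
next_index = 1

def encode(headers):
    global next_index
    out = []
    for field in headers:
        if field in index_table:
            out.append(index_table[field])   # repeat -> index only
        else:
            index_table[field] = next_index
            next_index += 1
            out.append(field)                # first time -> full text
    return out

first  = encode([("user-agent", "demo"), (":method", "GET")])
second = encode([("user-agent", "demo"), (":method", "GET")])
print(first)   # full header fields travel once
print(second)  # [1, 2] -- only indexes travel the second time
```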

2. Binary format

HTTP1.1 uses plain text

HTTP/2 fully adopts binary format, and both header information and data body are binary.

3. Data Flow

HTTP/2 data packets are not sent in order. Continuous data packets in the same connection may belong to different responses. (Data packets are marked to indicate which request they belong to. The data stream number sent by the client is odd, and the data stream number sent by the server is even. The client can also specify the priority of the data stream. The server will respond to the request with a higher priority first.)
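The odd/even numbering rule mentioned above can be sketched directly; the helper function is hypothetical, but the rule it encodes (client-initiated streams odd, server-initiated even) is HTTP/2's:

```python
# HTTP/2 stream numbering: client streams get odd IDs, server streams get
# even IDs, so interleaved frames on one connection can always be matched
# back to the right request or response.
def stream_ids(role, count):
    start = 1 if role == "client" else 2
    return [start + 2 * i for i in range(count)]

print(stream_ids("client", 3))  # [1, 3, 5]
print(stream_ids("server", 3))  # [2, 4, 6]
```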

4. Multiplexing

HTTP/2 can send multiple requests and responses concurrently over a single connection.

For example, if the server receives requests from clients A and B on one connection and finds that A is very time-consuming to process, it can first respond with the part of A that is ready, then answer B's request in full, and finally send the remainder of A's response.

5. Server push

The server can proactively push resources to the client before the client requests them.

9. What is the difference between HTTP3 and HTTP2?

1. Different transport protocols

HTTP/2 is implemented on top of the TCP protocol.

HTTP/3 is implemented on top of the UDP protocol.

2. QUIC

HTTP/3 adds the QUIC protocol on top of UDP to achieve reliable transmission.

3. Number of handshakes

HTTP/2 typically runs over HTTPS: establishing a connection requires the TCP three-way handshake followed by the TLS handshake.

HTTP/3 merges the transport and TLS 1.3 handshakes into QUIC's single handshake, so fewer round trips are needed before data can flow.

10. What is the process of establishing a TCP connection?

First handshake: A's TCP process creates a transmission control block (TCB), then sends a connection-request segment to B with the synchronization bit SYN set to 1 and a chosen initial sequence number seq=x. Client A enters the SYN-SENT (synchronization sent) state.

Second handshake: B receives the connection-request segment and, if it agrees to establish the connection, sends a confirmation to A with SYN=1, the acknowledgment bit ACK=1, acknowledgment number ack=x+1, and its own chosen initial sequence number seq=y. Server B enters the SYN-RCVD state.

Third handshake: after A receives B's confirmation, it sends a final acknowledgment to B with ACK=1, acknowledgment number ack=y+1, and sequence number seq=x+1. A enters the ESTABLISHED state, and when B receives this acknowledgment it also enters ESTABLISHED. The connection is now set up.
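In everyday socket code the three-way handshake happens inside connect() and accept(): connect() sends the SYN and returns once the final ACK is out, at which point both sockets are ESTABLISHED. The loopback addresses below are illustrative:

```python
# The handshake is hidden inside connect()/accept().
import socket
import threading

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

def accept_one():
    conn, _ = server.accept()   # returns when the handshake completes
    conn.close()

threading.Thread(target=accept_one).start()

client = socket.socket()
client.connect(server.getsockname())   # SYN -> SYN+ACK -> ACK
established = client.fileno() != -1
print("connected:", established)
client.close()
server.close()
```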

11. Why a three-way handshake?

1. To prevent an old, invalid connection-request segment from suddenly reaching the server and causing an error

If the client sends several SYN segments in succession and the network is congested, an old SYN may arrive at the server after a newer one, and the stale request could be mistaken for a valid connection. With a third handshake, the client can refuse to acknowledge the server's reply to an outdated SYN, so at least three messages are required to avoid this.

2. Three handshakes are just enough, avoiding wasted resources

Three messages are the theoretical minimum for establishing a reliable connection, so additional round trips would only waste resources.

3. To synchronize the initial sequence numbers of both parties

Each side's initial sequence number (used to detect duplicates, deliver data in order, and so on) must be acknowledged by the other. Naively that takes four messages, but the server's SYN and its ACK of the client's SYN can be combined into one segment, which is why three messages suffice.

12. What is the process of TCP disconnection?

The first wave: A first sends a connection release message segment, the termination control bit FIN in the segment header is 1, and the sequence number seq is u (equal to the last sequence number of the data sent by A before plus 1); then A enters the FIN-WAIT-1 (termination wait 1) state and waits for B's confirmation.

The second wave: After B receives the connection release segment from A, it immediately sends a confirmation segment with confirmation number ack=u+1 and sequence number seq=v (equal to the last sequence number of the data sent by B plus 1); then B enters the CLOSE-WAIT state.

The third wave: After receiving the confirmation segment from B, A enters the FIN-WAIT-2 (terminate wait 2) state and continues to wait for B to send a connection release segment;

If B has no data to send, B will send a connection release segment to A, with the termination control bit FIN=1 in the segment header, sequence number seq=w (some data may be sent in the half-closed state), and confirmation number ack=u+1. At this time, B enters the LAST-ACK (last confirmation) state and waits for A's confirmation.

The fourth wave: A receives the connection release message segment from B and sends a confirmation, in which the confirmation bit ACK=1, the confirmation number ack=w+1, and the sequence number seq=u+1; then A enters the TIME-WAIT state. When B receives the confirmation segment again, B enters the CLOSED state.

13. Why do we have to wait for 2MSL (60s) for the fourth wave?

First, the 2MSL time starts from the time when the client (A) sends ACK after receiving FIN. If the ACK from the client (A) is not transmitted to the server (B) during the TIME-WAIT time, and the client (A) receives the FIN message resent by the server (B), then the 2MSL time will be reset. The reasons for waiting for 2MSL are as follows:

1. To let segments from the old connection disappear from the network

1) If B does not receive the ACK, it will time out and retransmit the FIN; A then receives the retransmitted FIN and sends the ACK again.

2) If B receives its own ACK, it will not send any more messages.

After the last wave, A does not know whether B received its message.

A must therefore wait long enough to cover both situations above, taking the worst case: the maximum segment lifetime (MSL) of the outgoing ACK plus the MSL of a returning retransmitted FIN. That is exactly 2MSL, which is also enough time for any segments of the old connection to disappear from the network.

2. Ensure that the ACK is received by the server so that the connection is closed correctly

Because this ACK may be lost, the server might never receive confirmation of its FIN. If the client released the connection immediately after sending the ACK instead of waiting 2MSL, a lost ACK would leave the server unable to enter the closed state normally.

14. Why four waves?

Because TCP is full duplex, each direction of the connection must be closed separately. When A sends a FIN and receives B's acknowledgment, A has finished sending and B knows A will send no more data; but A can still receive, and B can still send. Only when B later sends its own FIN, and A acknowledges it, are both directions truly closed.

15. What is the TCP sliding window?

TCP requires a confirmation response for each data transmission. The next data transmission is sent only after the previous data transmission receives a response, which is very inefficient. Therefore, the concept of sliding window is introduced.

In essence, the sender sets aside a buffer for segments that have been sent but not yet acknowledged. If the window allows 5 TCP segments, the sender can transmit 5 segments in a row and buffer them. Because the segments are ordered and acknowledgments are cumulative, an ACK for a later segment also confirms all the earlier ones. The window size is determined by the receiver.

The window size refers to the size of data that can be sent without waiting for a response.
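A toy sender illustrating the idea, with made-up segment numbers and window size: the whole window is sent without waiting, and one cumulative ACK slides the window past every segment it covers:

```python
# Toy sliding-window sender with cumulative acknowledgments.
window_size = 5
base = 0        # oldest unacknowledged segment
next_seq = 0    # next segment to send

def send_allowed():
    return next_seq - base < window_size

sent = []
while send_allowed():          # burst: fill the window, no per-segment wait
    sent.append(next_seq)
    next_seq += 1
print("in flight:", sent)      # [0, 1, 2, 3, 4]

ack = 3                        # cumulative ACK: segments 0..2 confirmed
base = ack
print("window slides; can now send", window_size - (next_seq - base), "more")
```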

16. What if the sender keeps sending data but the receiver can't handle it? (Flow Control)

If the receiver cannot handle it, the sender will trigger the retry mechanism to send the data again. However, this will cause performance loss. To solve this problem, TCP proposes flow control so that the sender can know the processing capacity of the receiver.

That is to say, each time the receiver receives data, it will tell the sender the size of the remaining processable data.

For example, if the available sliding window size of the receiver is 400 bytes and the sender sends 100 bytes of data, then the remaining available sliding window size of the receiver is 300 bytes. This allows the sender to know the size range of the data to be sent back next time.

But there is a problem here. The data will be stored in the buffer, but this buffer is controlled by the operating system. When the system is busy, the buffer will be reduced, which may cause packet loss.

For example: The window size of the sender and the receiver is 200 bytes each. The sender sends 100 bytes to the receiver. At this time, both sides have 100 bytes left, but the operating system is very busy at this time and reduces the receiver's buffer by 50 bytes. At this time, the receiver will tell the sender that I still have 50 bytes available, but before the receiver sends it, the sender does not know it and only sees that it still has 100 bytes available, so it continues to send data. If 80 bytes of data are sent, then the receiver's buffer size is 50 bytes, and 30 bytes of data will be lost, that is, packet loss will occur.

The root cause of the problem is that the buffer was reduced and the window shrank at the same time. For this reason TCP forbids shrinking the buffer and the window simultaneously: the receiver must first advertise a smaller window, wait for it to take effect, and only then reduce the buffer.
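The bookkeeping in the 400-byte example above, in code: each acknowledgment advertises the receiver's remaining space, and a well-behaved sender never sends more than that. The helper function is hypothetical; the sizes are the article's example values:

```python
# Flow-control bookkeeping: the sender must respect the advertised window.
receiver_window = 400            # bytes the receiver can still hold

def on_send(n):
    global receiver_window
    assert n <= receiver_window, "sender must respect the advertised window"
    receiver_window -= n

on_send(100)
print("advertised window now:", receiver_window)  # 300

on_send(80)                      # an 80-byte send still fits
# Trying to exceed the advertised window fails immediately, modelling why
# shrinking the buffer underneath the sender would otherwise lose data.
try:
    on_send(500)
except AssertionError as e:
    print("blocked:", e)
```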

17. What are TCP semi-connection queues and full-connection queues?

After receiving a SYN request from a client, the server stores the connection information in the half-connection queue (SYN queue).

After the server receives the ACK of the third handshake, the kernel removes the connection from the semi-connection queue, creates a new complete connection, and adds it to the full connection queue (accept queue), waiting for the process to call the accept function to take out the connection.

Both queues have size limits. When capacity is exceeded, the connection is dropped or an RST packet is returned.

18. How do packet sticking and splitting happen? How are they solved?

When TCP sends data, it divides the packets according to the actual situation of the TCP buffer. A complete packet may be split into multiple packets by TCP for sending, or it may encapsulate multiple small packets into a large data packet for sending. This is the TCP packet sticking and unpacking problem.

The reasons why TCP sticky packets occur:

1. The sent data is smaller than the TCP buffer size. When TCP sends the data in the buffer (the data belongs to multiple business contents) at one time, packet sticking may occur.

2. If the application layer at the receiving end fails to read the data in the receiving buffer in time, packet sticking will occur.

TCP packet splitting occurs because:

1. The data to be sent is larger than the maximum message length, and TCP will unpack it before transmission.

2. If the data sent is larger than the remaining space in the TCP send buffer, packet unpacking will occur.

Solution:

1. The sender adds a packet header to each data packet, which contains the length of the data packet. After receiving the data, the receiver can know the actual length of each data packet through this field.

2. The sender sets a fixed length for each data packet, so that the receiver splits each data packet each time it reads the fixed-length data.

3. Boundaries can be set between data packets, such as adding special symbols, and the receiving end can split the packets based on this special symbol.
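The first fix listed above, a length-prefixed header, can be sketched in a few lines: each message is sent as a 4-byte big-endian length followed by the payload, so the receiver can split a sticky byte stream back into whole messages. Function names and the example payloads are illustrative:

```python
# Length-prefix framing over a sticky TCP byte stream.
import struct

def frame(payload: bytes) -> bytes:
    return struct.pack(">I", len(payload)) + payload

def deframe(stream: bytes):
    messages = []
    while len(stream) >= 4:
        (length,) = struct.unpack(">I", stream[:4])
        if len(stream) < 4 + length:
            break                        # partial message: wait for more bytes
        messages.append(stream[4:4 + length])
        stream = stream[4 + length:]
    return messages, stream              # leftover bytes stay buffered

# Two logical messages arrive stuck together in one TCP read:
sticky = frame(b"hello") + frame(b"world")
msgs, rest = deframe(sticky)
print(msgs)   # [b'hello', b'world']
print(rest)   # b''
```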

19. What happens after you enter the website in the browser address bar and press Enter?

1: Parse the URL and generate HTTP request information

2: Query the real requested IP address based on the DNS server, and return it directly if the local server has a cache

3: After obtaining the IP, establish a TCP connection with the server through the three-way handshake.

4: The server receives the TCP segments, reassembles the request, and parses the HTTP message.

5: The server returns a response

6: The browser receives the response, parses it, and renders the page.
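Steps 1 and 2 above can be sketched with the standard library: the browser first parses the URL, then resolves the host name to an IP address (getaddrinfo consults the local cache and resolver). The URL uses localhost so the example needs no external DNS:

```python
# Parse the URL, then resolve the host name to an IP address.
import socket
from urllib.parse import urlparse

url = urlparse("http://localhost:8080/index.html?tab=1")
print(url.scheme, url.hostname, url.port, url.path)

infos = socket.getaddrinfo(url.hostname, url.port, type=socket.SOCK_STREAM)
ip = infos[0][4][0]
print("resolved to:", ip)   # loopback address for localhost
```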

This article is reproduced from the WeChat public account "moon chat technology"
