Everything You Should Know About Computer Networks for Your Job Interview

"Are you ready for the interview?"

"Of course. Please start."

"Great. Can you tell me a little bit about what you know about computer networks? Maybe we can start with what TCP is and how it works."

"Hmm... Hmm... Hmm..."

"Okay, thank you for your time today. Please wait for further notice in the next few days."

Does this seem similar to your experience?

Don't give up, you can definitely do better.

Here are some common computer networking questions you will encounter in technical interviews. Most of them are related to application, transport, and network layers. You can follow this guide to learn computer networking from scratch and explain them clearly to the interviewer.


1. What is the architecture of a computer network?

Computer networks are commonly taught as a 5-layer model. This is a hybrid of the OSI (Open Systems Interconnection) model and the TCP/IP model, which have 7 layers and 4 layers respectively.

> Different Models of Computer Networks (Image by Author)

2. What is the function of each layer?

The following introduction will be based on the OSI network model:

  • Application layer: The task of the application layer is to specify the communication protocol or interface between the application processes running on the host. There are some common protocols in the application layer, such as HTTP (Hypertext Transfer Protocol), DNS (Domain Name System), and SMTP (Simple Mail Transfer Protocol).
  • Presentation Layer: This layer is mainly responsible for converting the data coming from the application layer into the required format (e.g. ASCII). Some things like data encryption/decryption and compression are done here.
  • Session Layer: This layer is responsible for establishing and maintaining a connection or session between two processes. It also allows processes to add checkpoints for synchronization.
  • Transport layer: It provides end-to-end data (segment) transmission services between applications through the network. The most famous protocols of the transport layer are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
  • Network layer: The network layer is responsible for the routing of packets (data blocks). Specifically, the network layer selects appropriate transmission paths and sends and receives IP (Internet Protocol) packets from other networks.
  • Data Link Layer: This layer encapsulates IP packets from the network layer into frames and transmits them between adjacent nodes on the link. Frame delivery depends on the MAC (Media Access Control) address. The MAC address of the receiver can be obtained by broadcasting an ARP (Address Resolution Protocol) request asking which node holds the required IP address.
  • Physical layer: Responsible for transmitting raw bits between nodes over the physical medium (via physical data links), hiding differences between transmission devices as much as possible.
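The layered design above can be pictured as successive encapsulation: each layer wraps the payload from the layer above with its own header. Here is a toy Python sketch of that idea; the `[TCP|...]`, `[IP|...]`, and `[ETH|...]` tags are made-up string markers, not real header formats.

```python
# Toy sketch of layer-by-layer encapsulation. The bracketed tags are
# illustrative stand-ins for real binary headers.

def encapsulate(payload: str) -> str:
    segment = f"[TCP|{payload}]"   # transport layer adds a TCP header
    packet = f"[IP|{segment}]"     # network layer adds an IP header
    frame = f"[ETH|{packet}]"      # data link layer adds a frame header
    return frame

def decapsulate(frame: str) -> str:
    # The receiver strips headers in the reverse order.
    inner = frame
    for tag in ("[ETH|", "[IP|", "[TCP|"):
        assert inner.startswith(tag) and inner.endswith("]")
        inner = inner[len(tag):-1]
    return inner

frame = encapsulate("GET / HTTP/1.1")
print(frame)                # [ETH|[IP|[TCP|GET / HTTP/1.1]]]
print(decapsulate(frame))   # GET / HTTP/1.1
```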

3. What are TCP and UDP in the transport layer? What is the difference between them?

TCP (Transmission Control Protocol) is a connection-oriented service, which means it establishes a connection before transmitting data and closes the connection after the transmission.

The reliability of TCP is reflected in the establishment of a connection through a three-way handshake, as well as some mechanisms such as error detection, flow control, congestion control, and retransmission. These functions will cost a lot of overhead and occupy processor resources.

TCP is commonly used for file transfer, sending and receiving mail, and remote login.

UDP (User Datagram Protocol) does not require a connection to be established before data transmission, and the remote host does not acknowledge the UDP datagrams it receives.

Although UDP does not provide reliable transport, it is the more efficient choice in latency-sensitive situations such as instant messaging and real-time audio and video streaming.
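The practical difference shows up directly in the socket API. Below is a minimal loopback sketch in Python: the TCP side needs `listen`/`accept`/`connect` (the three-way handshake happens inside `connect`), while the UDP side just sends a datagram with no handshake and no acknowledgment. The one-shot echo server and the messages are illustrative.

```python
import socket
import threading

# --- TCP: connection-oriented ---
def tcp_echo_once(server: socket.socket) -> None:
    conn, _ = server.accept()          # blocks until the handshake completes
    with conn:
        conn.sendall(conn.recv(1024))  # echo one message back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
server.listen(1)
t = threading.Thread(target=tcp_echo_once, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())   # three-way handshake happens here
client.sendall(b"hello tcp")
echoed = client.recv(1024)
client.close()
t.join()
server.close()

# --- UDP: connectionless ---
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello udp", receiver.getsockname())  # no handshake, no ACK
data, _ = receiver.recvfrom(1024)
sender.close()
receiver.close()

print(echoed, data)  # b'hello tcp' b'hello udp'
```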

> TCP vs UDP (Image by Author)

4. How does TCP establish and terminate connections?

Let's first look at how a TCP connection is established in the client/server model, which is often called a three-way handshake:

  • Client: It sends a SYN segment which requests the server to synchronize its sequence number with that of the client.
  • Server: After receiving the SYN from the client, the server returns a SYN+ACK segment, which acknowledges the client's sequence number and carries the server's own initial sequence number.
  • Client: It sends back an ACK segment acknowledging the server's sequence number; the connection is now established.

> TCP Connection Establishment (Image by Author)

The SYN segment confirms that there is no problem with the route from the sender to the receiver, but the route from the receiver to the sender should be confirmed by the ACK segment.
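The exchange above can be sketched as a toy simulation that tracks only the sequence and acknowledgment numbers; the initial sequence numbers (ISNs) below are arbitrary examples, and no real networking is involved.

```python
# Toy simulation of the three-way handshake, tracking only seq/ack numbers.

def three_way_handshake(client_isn: int, server_isn: int) -> list[str]:
    log = []
    # 1. Client -> Server: SYN with the client's initial sequence number.
    log.append(f"SYN seq={client_isn}")
    # 2. Server -> Client: SYN+ACK; the ACK number is the client's ISN + 1.
    log.append(f"SYN+ACK seq={server_isn} ack={client_isn + 1}")
    # 3. Client -> Server: ACK of the server's ISN + 1; connection established.
    log.append(f"ACK seq={client_isn + 1} ack={server_isn + 1}")
    return log

for line in three_way_handshake(client_isn=100, server_isn=300):
    print(line)
```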

Next, we will discuss how TCP terminates the connection in the "client/server" model, which is a four-way handshake process:

  • Client: After deciding to close the connection, the client will send a FIN segment to the server. The client will then enter the FIN_WAIT_1 state, waiting for confirmation from the server.
  • Server: Once it receives the FIN segment from the client, it sends back an ACK segment.
  • Client: After receiving the ACK segment from the server, the client enters the FIN_WAIT_2 state, waiting for the FIN segment from the server.
  • Server: Once it is also ready to close the connection (after sending the ACK segment), it sends its own FIN segment to the client.
  • Client: After receiving the FIN segment from the server, the client sends back a final ACK segment for confirmation and enters the TIME_WAIT state. If the server does not receive this final ACK, it retransmits its FIN, so the client waits for a period of time (typically twice the maximum segment lifetime) before the connection is officially closed.
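The client-side state transitions described above can be listed as a small sketch; the state names match the TCP state machine, and the trace itself is purely illustrative.

```python
# Toy trace of the client-side TCP states during connection teardown.

def client_teardown_states() -> list[str]:
    return [
        "ESTABLISHED",  # before the client sends its FIN
        "FIN_WAIT_1",   # FIN sent, waiting for the server's ACK
        "FIN_WAIT_2",   # server ACKed, waiting for the server's FIN
        "TIME_WAIT",    # server's FIN ACKed; wait ~2*MSL before closing
        "CLOSED",
    ]

print(" -> ".join(client_teardown_states()))
```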

5. What is ARQ (Automatic Repeat Request)?

ARQ is an error control method used for data transmission in the transport layer and data link layer.

Acknowledgements and timeouts are used to ensure reliable data transmission. If the sender does not receive an acknowledgment within a given time, it resends the same packet until an acknowledgment is returned or a predefined retransmission limit is exceeded.

There are two common types of ARQ:

  • Stop and Wait ARQ: The basic idea of stop-and-wait ARQ is that the sender stops after transmitting each packet. If no acknowledgment arrives from the receiver within a given time, the transmission is considered failed and the packet is retransmitted until an acknowledgment is received. If the receiver gets a duplicate packet, it discards the duplicate but still sends back an acknowledgment.
  • Go-Back-N ARQ: The sender maintains a sliding window within which packets can be sent continuously without waiting for acknowledgments. The receiver generally accepts only in-order packets and sends back a cumulative ACK for the last in-order packet received.
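Stop-and-wait ARQ can be sketched as a toy simulation in which a lossy channel drops packets at random and the sender retransmits until each packet gets through. The loss rate, retry limit, and random seed are illustrative assumptions, not part of any real protocol.

```python
import random

# Toy stop-and-wait ARQ: retransmit each packet until the lossy channel
# delivers it and an ACK comes back (loss model is illustrative).

def send_with_stop_and_wait(packets, loss_rate=0.3, max_retries=20, rng=None):
    rng = rng or random.Random(42)  # seeded for reproducibility
    delivered, total_sends = [], 0
    for pkt in packets:
        for _ in range(max_retries):
            total_sends += 1
            if rng.random() >= loss_rate:  # packet survived the channel
                delivered.append(pkt)      # receiver ACKs; sender moves on
                break
        else:
            raise TimeoutError(f"gave up on packet {pkt!r}")
    return delivered, total_sends

delivered, sends = send_with_stop_and_wait(["a", "b", "c", "d"])
print(delivered, sends)
```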

6. How does TCP implement flow control?

The purpose of flow control is to control the rate at which packets are sent so that the receiver can process them in time.

TCP implements flow control with a sliding window. The size of the sender's window is controlled by the window value carried in the ACK segments returned by the receiver, which in turn limits the sending rate.
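A minimal sketch of the sender-side arithmetic, assuming a receiver-advertised window `rwnd` in bytes; the function name and the numbers are illustrative.

```python
# Toy flow-control arithmetic: the sender may only have `rwnd` bytes
# unacknowledged ("in flight") at any moment.

def bytes_sender_may_send(total: int, acked: int, sent: int, rwnd: int) -> int:
    in_flight = sent - acked                 # bytes sent but not yet ACKed
    window_space = max(rwnd - in_flight, 0)  # room left in the window
    remaining = total - sent                 # bytes not yet sent at all
    return min(window_space, remaining)

# 1000 bytes to send, 300 already ACKed, 500 sent, receiver advertises 400:
print(bytes_sender_may_send(1000, acked=300, sent=500, rwnd=400))  # 200
```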

7. How does TCP implement congestion control?

Network congestion occurs when the demand for network resources exceeds what the network can handle.

Congestion control is to prevent too much data from being injected into the network so that the network links or nodes are not overloaded.

TCP congestion control uses a variety of strategies, including:

  • Slow start: Instead of injecting a large amount of data into the network at once, TCP sends a small amount of data at first and doubles the congestion window (cwnd) after each RTT (round-trip time), so the window grows exponentially.
  • Congestion Avoidance: After the congestion window (cwnd) reaches a threshold (ssthresh), it grows linearly instead, to avoid causing network congestion.
  • Congestion Detection: When congestion is detected, the congestion window is reduced sharply. Congestion is assumed to have occurred when packets need to be retransmitted.
  • Fast Retransmit and Recovery (FRR): This congestion control mechanism recovers quickly from lost packets. Without FRR, TCP pauses transmission until a retransmission timer expires, and no new packets are sent during the pause. With FRR, when the receiver gets an out-of-order segment, it immediately returns a duplicate ACK; after receiving three duplicate ACKs, the sender assumes the segment was lost and retransmits it right away. FRR thus reduces the delay of retransmission.
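The first two strategies can be visualized with a toy trace of the congestion window, measured in MSS units: exponential growth below the threshold, linear growth above it. The threshold and round count here are illustrative.

```python
# Toy congestion-window trace: slow start doubles cwnd each RTT until
# ssthresh, then congestion avoidance adds 1 MSS per RTT.

def cwnd_trace(rounds: int, ssthresh: int = 16) -> list[int]:
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2   # slow start: exponential growth
        else:
            cwnd += 1   # congestion avoidance: linear growth
    return trace

print(cwnd_trace(8))  # [1, 2, 4, 8, 16, 17, 18, 19]
```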

8. What is the process from entering a URL to displaying a web page?

This process can be divided into several steps:

  • DNS resolution.
  • Establish a TCP connection.
  • Send an HTTP request.
  • The server processes the request and returns an HTTP response.
  • The browser renders the web page.
  • Close the connection.

> The Process of Accessing URL and Protocols Used (Image by Author)
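The steps above can be sketched with Python's standard `socket` module, assuming a plain HTTP site on port 80. Response parsing is omitted, and `example.com` is only an illustrative host.

```python
import socket

def build_request(host: str, path: str = "/") -> bytes:
    # Step 3: compose the raw HTTP request (steps 1-2, the DNS lookup and
    # the TCP handshake, are performed by getaddrinfo() and connect() below).
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n").encode()

def fetch(host: str, port: int = 80, path: str = "/") -> bytes:
    addr = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)[0][4]  # 1. DNS
    with socket.create_connection(addr) as sock:                          # 2. TCP handshake
        sock.sendall(build_request(host, path))                           # 3. HTTP request
        chunks = []
        while chunk := sock.recv(4096):                                   # 4. server response
            chunks.append(chunk)
    return b"".join(chunks)  # 5-6. the browser would render, then close

print(build_request("example.com"))
```

Calling `fetch("example.com")` would return the raw HTTP response bytes, but requires network access.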

9. How does HTTP save user status?

HTTP is a "stateless" protocol, meaning that the protocol itself does not retain any state between successive requests and responses.

So how do we save user state?

Sessions solve this problem. The main function of a session is to record user state on the server side.

For example, suppose you put some products into your Amazon shopping cart to buy later. Because HTTP is stateless, the protocol itself cannot tell who saved those items. The server therefore creates and keeps a session for you so that your shopping information can be tracked.

10. What are cookies in computer networking? What is the difference between cookies and sessions?

Cookies and sessions can both track and store user identities, but they are generally used in different situations.

Cookies are often used to store user information. For example, after logging into a website, you do not need to log in again next time because your login details have been stored as a token in a cookie; the server only needs to look up the user based on the token value.

Sessions record user state on the server. A typical scenario using sessions is an online shopping cart. Since HTTP is stateless, the server keeps track of the user's state by associating the user with a session.

Cookie data is stored on the client (browser), while session data is stored on the server side. This means that sessions have a higher level of security compared to cookies.
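A toy sketch of the division of labor: the browser keeps only an opaque session ID (delivered as a cookie), while the actual user state lives in a server-side store. All names here are illustrative.

```python
import secrets

# Toy server-side session store: session ID -> user state. The browser
# would hold only the opaque session ID, sent back as a cookie.
SESSIONS: dict[str, dict] = {}

def create_session(user: str) -> str:
    session_id = secrets.token_hex(16)  # opaque token delivered as a cookie
    SESSIONS[session_id] = {"user": user, "cart": []}
    return session_id

def add_to_cart(session_id: str, item: str) -> None:
    SESSIONS[session_id]["cart"].append(item)  # state stays on the server

sid = create_session("alice")  # browser stores sid in a cookie
add_to_cart(sid, "keyboard")
add_to_cart(sid, "mouse")
print(SESSIONS[sid]["cart"])   # ['keyboard', 'mouse']
```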

11. What is the difference between HTTP and HTTPS?

HTTP runs directly on top of TCP and transmits content in plain text. Neither the client nor the server can verify the other's identity.

HTTPS (Hypertext Transfer Protocol Secure) is HTTP running over SSL/TLS, which runs over TCP/IP. Everything that is transmitted is encrypted.

Therefore, HTTPS is more secure than HTTP, but HTTPS requires more resources than HTTP.
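In code, the difference is small but crucial: HTTPS wraps the ordinary TCP socket in a TLS layer, shown here with Python's standard `ssl` module, which encrypts the stream and verifies the server's certificate. The host name in the usage comment is illustrative, and the connection itself requires network access.

```python
import socket
import ssl

# The default context loads the system's trusted CA certificates and
# requires the server to present a valid certificate.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True

def open_https_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    raw = socket.create_connection((host, port))           # ordinary TCP
    return context.wrap_socket(raw, server_hostname=host)  # TLS handshake

# Usage (requires network access; host is illustrative):
#   with open_https_connection("example.com") as tls:
#       print(tls.version())  # e.g. 'TLSv1.3'
```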

To be clear, there is still a lot more to learn about computer networks. Since technical interviews, especially for junior software engineers, often focus on the upper layers of the network stack, the questions above cover only a small part of that area.
