Hello everyone, I am Xiaolin. Among Internet positions, back-end development is the most competitive and has the most applicants, while client-side development has very few. Students who apply for back-end positions are sometimes picked up by the client-side department for interviews. So if you cannot get into a back-end position but still want to join a big company, try applying for client-side development: the interviews are relatively less competitive, and the salary and benefits are the same as for back-end roles. Today I would like to share one classmate's interview experience for a Kuaishou client-side position, covering the first, second, and third rounds. His technology stack is C++ backend, but the interviews did not touch backend content; the questions mainly revolved around C++, operating systems, network protocols, and algorithms. Compared with backend interviews there is less to prepare (no MySQL, Redis, message queues, or other backend components), but computer-science fundamentals are probed in greater depth. The third round in particular was still a bit difficult: two scenario coding questions had to be written out by hand.

Kuaishou First Interview

Congestion Control Introduction

When the network is congested, if a large number of data packets continue to be sent, it may cause packet delays and losses. TCP will then retransmit the data, but retransmission puts an even heavier burden on the network, resulting in greater delays and more packet loss, and the situation enters a vicious cycle that keeps amplifying itself. Therefore, TCP cannot ignore what happens on the network. It is designed as a selfless protocol: when the network is congested, TCP sacrifices itself and reduces the amount of data it sends. Hence congestion control, whose purpose is to prevent the sender's data from filling up the entire network.
In order to regulate the amount of data sent on the sender side, a concept called the "congestion window" (cwnd) is defined. Congestion control mainly consists of four algorithms:
- slow start
- congestion avoidance
- congestion occurrence (the reaction to packet loss)
- fast recovery
Slow Start

After TCP has just established a connection, it first goes through a slow start process, which means increasing the number of data packets sent little by little. If a large amount of data were sent right away, wouldn't that congest the network? The slow start algorithm only requires one rule to be remembered: each time the sender receives an ACK, the congestion window cwnd increases by 1. Here it is assumed that the congestion window cwnd and the send window swnd are equal. Here is an example:
The change process of the slow start algorithm is as follows (figure: slow start algorithm). It can be seen that with the slow start algorithm, the number of packets sent grows exponentially. So when does slow start stop? There is a state variable called the slow start threshold, ssthresh: while cwnd is below ssthresh, slow start is used; once cwnd reaches ssthresh, congestion avoidance takes over.
Congestion Avoidance

When the congestion window cwnd exceeds the slow start threshold ssthresh, the congestion avoidance algorithm is entered. Generally, the initial size of ssthresh is 65535 bytes. The rule of congestion avoidance is: every time an ACK is received, cwnd increases by 1/cwnd. Continuing the previous slow start example, let's assume ssthresh is 8:
The change process of the congestion avoidance algorithm is as follows (figure: congestion avoidance). We can see that congestion avoidance turns the exponential growth of slow start into linear growth: cwnd is still growing, just more slowly. As the traffic keeps growing, the network gradually becomes congested and packets get lost, so the lost packets have to be retransmitted. When the retransmission mechanism is triggered, the "congestion occurrence algorithm" is entered.
When the network is congested, data packets are retransmitted. There are two main retransmission mechanisms:
- timeout retransmission: no ACK arrives before the retransmission timer (RTO) fires;
- fast retransmission: the sender receives three duplicate ACKs and retransmits without waiting for the timer.
When a "timeout retransmission" occurs, the congestion occurrence algorithm is used, and the values of ssthresh and cwnd change as follows:
- ssthresh is set to cwnd / 2;
- cwnd is reset to the initial value (1 in this example).
The changes in the congestion occurrence algorithm are shown in the following figure (congestion occurrence: timeout retransmission). Then slow start is restarted, which suddenly reduces the data flow: a single "timeout retransmission" throws everything back to square one. However, this method is too drastic and the reaction is too strong, which causes network stalls. There is a better way. We talked about the "fast retransmit algorithm" before: when the receiver finds that an intermediate packet is lost, it sends the ACK of the previous packet three times, so the sender retransmits quickly without waiting for a timeout. TCP considers this situation not serious, because most packets got through and only a small part was lost. In that case ssthresh and cwnd change as follows:
- cwnd = cwnd / 2;
- ssthresh = cwnd (the halved value);
- then the fast recovery algorithm is entered.
Fast retransmit and fast recovery are generally used together. The fast recovery algorithm holds that if the sender can still receive 3 duplicate ACKs, the network is not that bad, so there is no need to react as drastically as for an RTO timeout. As mentioned above, before entering fast recovery, cwnd and ssthresh have already been updated:
Then the fast recovery algorithm proceeds as follows:
- cwnd = ssthresh + 3 (the 3 accounts for the three duplicate ACKs, i.e. three packets confirmed to have left the network);
- the lost packet is retransmitted;
- for every further duplicate ACK received, cwnd increases by 1;
- when an ACK for new data arrives, cwnd is set back to ssthresh and congestion avoidance resumes.
The change process of the fast recovery algorithm is as follows (figure: fast retransmit and fast recovery). In other words, the sender is not thrown back to square one overnight as with "timeout retransmission"; cwnd stays at a relatively high value and then grows linearly.

What is the difference between http and https?
- HTTP transmits data in plaintext; HTTPS adds a TLS layer between HTTP and TCP, so the content is encrypted.
- Establishing an HTTP connection only needs the TCP three-way handshake; HTTPS additionally needs a TLS handshake.
- Their default ports differ: 80 for HTTP, 443 for HTTPS.
- HTTPS requires a digital certificate issued by a CA to prove the server's identity.
What are the four steps of the HTTPS encryption (TLS handshake) process?

The TLS handshake based on the RSA algorithm is relatively easy to understand, so we will use it to show the TLS handshake process (figure: HTTPS connection establishment process). The detailed process of establishing the TLS protocol:

1. ClientHello. First, the client initiates an encrypted communication request to the server, the ClientHello request, in which the client mainly sends: (1) the TLS protocol versions the client supports, such as TLS 1.2; (2) a random number generated by the client (Client Random), later used as one of the ingredients of the "session key"; (3) the list of cipher suites the client supports, such as RSA.

2. ServerHello. After receiving the client's request, the server responds with a ServerHello containing: (1) confirmation of the TLS protocol version (if the client's version is not supported, encrypted communication is closed); (2) a random number generated by the server (Server Random), also an ingredient of the later "session key"; (3) the confirmed cipher suite, such as RSA; (4) the server's digital certificate.

3. Client response. After receiving the server's response, the client first verifies the authenticity of the server's digital certificate using the CA public keys built into the browser or operating system. If the certificate checks out, the client extracts the server's public key from it, uses it for encryption, and sends the server: (1) a random number (the pre-master key), encrypted with the server's public key; (2) a change-cipher-spec notification, indicating that all subsequent communication will be encrypted with the "session key";
(3) a client handshake-finished notification, indicating that the client's handshake phase has ended; it also includes a digest of all the previous handshake data for the server to verify. The random number in item (1) is the third random number of the handshake, and since it is sent to the server, both client and server now hold the same three random numbers (Client Random, Server Random, pre-master key). Each side then feeds them into the encryption algorithm agreed by both parties to generate the "session key" for this connection.

4. Final response from the server. After receiving the third random number (pre-master key), the server computes the same "session key" via the negotiated algorithm and sends the client: (1) a change-cipher-spec notification, indicating that all subsequent communication will be encrypted with the "session key"; (2) a server handshake-finished notification, which also includes a digest of all the previous handshake data for the client to verify.

At this point the TLS handshake is over. The client and server then communicate using the normal HTTP protocol, but with the content encrypted under the "session key".

What is the difference between get and post?
- Semantics: GET requests a resource; POST submits data to the specified resource for processing.
- Parameters: GET parameters are carried in the URL; POST parameters are carried in the request body.
- Per the RFC semantics, GET is safe and idempotent while POST is neither, so GET responses can be cached and bookmarked.
What is the difference between a process and a thread?
- A process is the unit of resource allocation in the operating system; a thread is the unit of CPU scheduling.
- A process has its own independent address space; the threads within a process share the process's address space, heap, global variables, and open files, while each thread keeps its own stack and registers.
- Creating and switching threads is cheaper than creating and switching processes, but processes are better isolated: one crashed process does not affect others, whereas an errant thread can bring down its whole process.
What resources does a thread have, and what is stored in the stack?

A thread has some resources of its own in the operating system, including:
- a thread ID;
- a set of registers, including the program counter and stack pointer;
- its own stack;
- per-thread state such as errno, the signal mask, and scheduling priority.
Everything else (the address space, heap, global variables, open file descriptors) is shared with the other threads of the same process.
The stack mainly stores the following contents:
- local variables of each function;
- function parameters (when not passed in registers);
- return addresses;
- saved registers, such as the caller's frame pointer.
When calling a function, what happens when the stack is pushed?

When a function is called, roughly the following stack operations are performed:
- the arguments are pushed (or passed in registers, depending on the calling convention);
- the return address is pushed so the callee knows where to resume;
- the caller's frame pointer is saved and a new frame pointer is set up;
- space for the callee's local variables is allocated by moving the stack pointer.
When the function returns, the frame is popped in reverse order and execution resumes at the return address.
What is the difference between static link libraries and dynamic link libraries?
- A static library (.a / .lib) is copied into the executable at link time: the resulting binary is self-contained but larger, and every program carries its own copy of the code.
- A dynamic library (.so / .dll) is loaded at run time: the binary is smaller, a single copy in memory can be shared by many processes, and the library can be upgraded without relinking the programs, at the cost of a load-time dependency.
How is a dynamic link library loaded into memory?

By using mmap, the library is mapped directly into the address space of each process. Although every process believes the library is loaded in its own address space, there is actually only one copy of the library's code in physical memory. In this way, mmap connects processes to the dynamic link library almost magically.

Introduction to Virtual Memory

Virtual memory gives each process its own private address space. The operating system, together with the MMU, translates virtual addresses into physical ones through page tables. This provides isolation between processes, lets programs larger than physical memory run by swapping pages to disk, and enables sharing of read-only pages, such as the mapped code of a dynamic library.
What is an interrupt?

In an operating system, an interrupt is an event triggered by hardware or software that temporarily stops the currently executing program and runs an interrupt handler instead. Interrupts can be internal, such as a division error or an out-of-bounds access, or external, such as an I/O request from a hardware device or a clock interrupt. Interrupts provide a mechanism for handling and responding to events: servicing device I/O, handling exceptions, driving clock-based scheduling, and so on. When an interrupt occurs, the operating system selects the handler to execute based on the interrupt type, and resumes the original program after the interrupt has been handled. The handler can perform a series of operations, such as saving the context of the current process, processing the interrupt event, communicating with the device, or scheduling another process. Through interrupts, the operating system achieves multitasking, responds to external events in real time, and improves the reliability and stability of the system.

OS locks: implement a read-write lock yourself

Readers only read the data and do not modify it, while writers can both read and modify the data. The reader-writer problem is described as follows:
- "read-read" is allowed: multiple readers may read at the same time;
- "read-write" is mutually exclusive: readers must wait until there is no writer, and a writer must wait until there are no readers;
- "write-write" is mutually exclusive: only one writer may write at a time.
Next, several solutions are proposed and analyzed.

Solution 1

Use semaphores to try to solve the problem:
- a binary semaphore wDataMutex guarding the data, acquired by writers (and by the group of readers as a whole);
- a binary semaphore rCountMutex protecting the counter rCount;
- a counter rCount recording how many readers are currently reading.
The above scheme is a reader-first strategy: as long as some reader is reading, readers that arrive later can enter directly, so if readers keep arriving, writers starve.

Solution 2

Since there is a reader-first strategy, there is also a writer-first strategy:
Based on Solution 1, the following variables are added:
- a binary semaphore rMutex, on which newly arriving readers block once a writer is waiting;
- a counter wCount recording the number of writers, protected by its own binary semaphore wCountMutex.
The implementation works as follows. Note the role of rMutex here: at the beginning several readers are reading the data and have all entered the reader queue. Now a writer arrives and executes P(rMutex), so subsequent readers block on rMutex and cannot enter the reader queue, while arriving writers can all enter the writer queue; this ensures writer priority. At the same time, the first writer cannot start writing immediately after P(rMutex): it must wait until all readers already in the reader queue have finished reading, and the last of them wakes the writer through V(wDataMutex).

Solution 3

Both the reader-first and the writer-first strategy can cause starvation, so let's implement a fair strategy:
- an additional binary semaphore flag, through which both readers and writers must pass on arrival, so whoever arrives first proceeds first.
After reading the code, you may wonder: why does adding the semaphore flag achieve fair competition? Comparing with the reader-priority strategy of Solution 1, we find that under reader priority, as long as readers keep arriving they can enter the reader queue, and writers must wait until no more readers arrive; only when the reader queue is empty (rCount == 0) can a writer enter the critical section to write. The role of flag is to revoke the readers' special privilege of always being admitted. For example: at the beginning some readers are reading and have entered the reader queue. A writer arrives and executes P(flag), so subsequent readers block on flag and cannot enter the reader queue, which therefore gradually empties (rCount drops to 0). The writer cannot start writing immediately (the reader queue is not yet empty) and blocks on the semaphore wDataMutex. After all readers in the queue have finished reading, the last reader executes V(wDataMutex) to wake the waiting writer, which then starts writing.

Algorithm Questions
Kuaishou Second Interview

The concepts of pointers, references, and pass-by-value
- A pointer is a variable that stores an address; it may be null and may be reseated to point elsewhere. A reference is an alias bound to an object at initialization; it can never be null and cannot be rebound.
- Passing by value copies the argument, so changes inside the function do not affect the caller. Passing a pointer or a reference lets the callee modify the caller's object; note that the pointer itself is still copied by value.
What causes precision loss when int, double, and string are converted to each other?
- double to int truncates the fractional part.
- A 64-bit integer converted to double can lose precision, because a double has only a 53-bit significand.
- Many decimal fractions (such as 0.1) cannot be represented exactly in binary floating point, so string-to-double conversion may already be inexact.
- Narrowing conversions such as double to float or long long to int can overflow or drop digits.
What is void*?

void* is a generic pointer type, known as an "untyped pointer". It can represent a pointer to any type, because it does not specify a concrete data type. Since void* is typeless, it cannot be dereferenced directly, nor used for pointer arithmetic; it must first be converted to a concrete pointer type. void* pointers are often used where operations must work across different types, for example when passing pointers of arbitrary type to functions, or in dynamic memory allocation.

How do you convert the void* returned by malloc to int*?

You can use a type conversion (cast) to turn a void* pointer into an int* pointer. malloc allocates the memory needed to store one int and returns a void*; casting that void* to int* and assigning it to a variable such as intPtr then lets you access and operate on the int through intPtr.

Algorithm Questions
Kuaishou Third Interview
The third interview did not ask any rote "textbook" questions; it went straight to two scenario coding questions that focused on practical skills. It was quite difficult.