This article is reprinted from the WeChat public account "Full Stack Coder Portrait"; the author is Xiaomajia. Please contact the Full Stack Coder Portrait public account for permission to reprint.

Modern applications rely on real-time data communication for many of their features.
The core of HTTP communication has not changed: it is still a request/response model, and that is the fundamental obstacle to real-time communication. For years, developers have tried to work around this limitation in various ways. Here is a quick survey of the popular techniques, each with a real-life anecdote to aid the explanation.

Periodic polling

Hiking with kids? The children ask "Are we there yet?" every minute or two, and your answers are brief and friendly, but the questions and answers never stop. In periodic polling, the client repeatedly asks the server whether there is new information. This is obviously not real time, but with a short enough polling interval it can come close. The price is a steady stream of unnecessary round trips between client and server whenever nothing has changed.

Long polling (Comet)

Take another hiking trip with your kids, but this time, when a child asks "Are we there yet?", you simply stay quiet and don't answer until the next stop (or the next tantrum). Long polling is a refinement of polling that better fits real-time needs: the client requests information from the server, and the server holds the request open until something noteworthy happens (or the request is about to time out). The client, for its part, must immediately issue another request upon every response or timeout, so that there is always an open request able to carry a real-time response; this pattern is known as Comet. Compared with periodic polling, long polling eliminates most unnecessary HTTP requests and saves resources, though the suspended connections themselves still tie up server resources. Long polling remains popular, but it usually requires custom programming on both server and client to implement well.

Server-Sent Events (SSE)

You shop on an e-commerce site and tick the push-notification box; from then on, marketing emails arrive three times a day without your asking. SSE is an HTML5 feature.
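Under the hood, an SSE stream is just a long-lived HTTP response whose body uses a simple text framing: `data:` lines accumulate, and a blank line dispatches the event. A minimal sketch of a parser for that framing (standalone, no network; the framing rules follow the HTML standard's EventSource section, and this sketch ignores the other field types):

```python
def parse_sse(stream_text):
    """Parse server-sent-event framing: 'data:' lines accumulate,
    and a blank line dispatches one event (per the HTML standard)."""
    events, data_lines = [], []
    for line in stream_text.split("\n"):
        if line == "":                                  # blank line => dispatch event
            if data_lines:
                events.append("\n".join(data_lines))    # multiple data lines join with '\n'
                data_lines = []
        elif line.startswith("data:"):
            data_lines.append(line[5:].removeprefix(" "))  # spec: strip one leading space
        # other fields (event:, id:, retry:) and comment lines (:) ignored in this sketch
    return events

# Two events: the second spans two data lines.
print(parse_sse("data: price=42\n\ndata: line1\ndata: line2\n\n"))
# → ['price=42', 'line1\nline2']
```

Browsers hide this framing behind `EventSource`, but seeing it makes clear why SSE needs nothing beyond an ordinary HTTP response that never ends.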
The biggest feature of SSE is that the client does not need to send repeated requests: as soon as data on the server is updated, it can be pushed to the client immediately. SSE is essentially a one-way push from server to client; the client uses the EventSource object (part of the HTML5 standard) to consume the server's streaming notifications.

WebSockets

When you travel abroad for the first time, once you and the other party settle on a common language, all subsequent communication is barrier-free. WebSockets relies on the persistent-connection mechanism of HTTP/1.1: the handshake phase is carried over HTTP, but once the connection is established, client and server stand on an equal footing and communicate in full duplex, with no distinction between requests and responses.

All of the techniques above work around the HTTP obstacle and enable real-time communication. The problem is that most of them demand a lot of work from the developer. Wouldn't it be nice if a framework removed the complexity of the transport so developers could focus on building real-time applications?

SignalR

SignalR is a mature real-time communication framework for the .NET stack. It provides an API for bidirectional remote procedure calls (RPCs) between server and clients, eliminating the complexity of real-time transport, and it offers a unified surface for connection and client management that scales with growing traffic. SignalR is built around server-side hubs that manage connected clients: server and clients can call methods on each other seamlessly, and the interaction is strongly typed. Text-based JSON is the default wire format, but SignalR also supports the binary MessagePack protocol for greater efficiency.

gRPC

HTTP/2, launched in 2015, focuses on security, data compression, better performance, and lower latency.
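gRPC builds directly on these HTTP/2 streams. As an illustration, here is a hypothetical Protocol Buffers service definition with a bidirectionally-streaming RPC; every name in it is invented for the sketch:

```proto
syntax = "proto3";

package demo;

// Hypothetical chat service: both sides stream messages
// over a single multiplexed HTTP/2 stream.
service ChatService {
  rpc Chat (stream ChatMessage) returns (stream ChatMessage);
}

message ChatMessage {
  string user = 1;  // field numbers identify fields on the wire
  string text = 2;
}
```

From a definition like this, the gRPC toolchain generates the client and server binding code in each target language.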
gRPC is a high-performance, general-purpose RPC framework developed by Google on top of the HTTP/2 protocol; HTTP/2's multiplexing is what enables gRPC's streaming capability. Out of the box, gRPC provides rich features such as integrated authentication, bidirectional streaming, and flow control, and it automatically generates cross-platform client and server binding code for many languages and platforms. Services and messages are defined in Protocol Buffers, a powerful binary serialization/deserialization toolset and interface definition language.

Reference: https://www.techunits.com/topics/architecture-design/exclusive-comparison-between-websockets-and-grpc/
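Since long polling, as noted above, requires custom programming on both server and client, it is worth seeing how little logic that actually involves. Below is a minimal in-process simulation, not a real network exchange: a `threading.Event` stands in for "something noteworthy happens" on the server, and all names are invented for the sketch.

```python
import threading

class LongPollServer:
    """Holds each poll open until data arrives or the poll times out."""
    def __init__(self):
        self._event = threading.Event()
        self._message = None

    def publish(self, message):
        # "Something noteworthy happens" on the server side.
        self._message = message
        self._event.set()

    def poll(self, timeout):
        """Block up to `timeout` seconds; return the message, or None on timeout."""
        if self._event.wait(timeout):
            self._event.clear()
            return self._message
        return None

server = LongPollServer()
received = []

def client():
    # Client loop: every response or timeout immediately triggers the next poll,
    # so there is always one request "in flight".
    while len(received) < 1:
        msg = server.poll(timeout=2.0)
        if msg is not None:
            received.append(msg)

t = threading.Thread(target=client)
t.start()
# Publish a message shortly after the client starts waiting.
threading.Timer(0.1, server.publish, args=["price=42"]).start()
t.join()
print(received)  # → ['price=42']
```

The held-open `poll` plus the client's immediate re-request are the whole trick; frameworks such as SignalR automate exactly this kind of loop (and fall back to it when WebSockets are unavailable).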