Communication styles in microservices architecture

In a microservices architecture, communication is a key element, and there is extensive discussion about choosing the most effective method for service-to-service interaction. In this introductory article, we will explore and summarize the best communication strategies for microservices, providing insights on when and how to effectively use each communication style.

Interaction Style

To effectively understand how services communicate in a microservices architecture, you must first become familiar with the available interaction styles. Each style has its own unique advantages and disadvantages. A deep understanding of these nuances is critical to making an informed decision before selecting an appropriate communication mechanism. This foundational knowledge ensures that the chosen approach is a good fit with the specific requirements and challenges of your system.

Interaction style can be divided into two dimensions. The first dimension is whether the interaction is one-to-one or one-to-many:

  • One-to-one — Each client request is handled by one service.
  • One-to-many — Each request is handled by multiple services.

The second dimension is whether the interaction is synchronous or asynchronous.

  • Synchronous — The client expects the service to respond promptly and may even block waiting.
  • Asynchronous — The client does not block and the response (if any) is not necessarily sent immediately.

The following table shows the different dimensions:

                  One-to-one                                One-to-many
   Synchronous    Request/response                          (none)
   Asynchronous   Asynchronous request/response;            Publish/subscribe;
                  one-way notification                      publish/async responses

Let’s discuss each of these briefly.

One-to-one interaction:

  • Request/Response — A service client sends a request to a service and waits for a response. The client expects the response to arrive in a timely manner and may even block waiting. This style of interaction often results in tightly coupled services.
  • Asynchronous request/response — A service client sends a request to a service, and the service replies asynchronously. The client does not block waiting because the service may not send a response for a long time.
  • One-way notification — A service client sends a request to a service but does not expect or send a reply.

One-to-many interaction:

  • Publish/Subscribe — Clients publish notification messages and interested services consume them.
  • Publish/Async Response — The client publishes a request message and then waits a period of time to receive a response from the service of interest.

Remember that a service can have multiple communication methods.
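The synchronous/asynchronous distinction above can be sketched in a few lines of Python, using a thread pool as a stand-in for a remote service. The `handle_request` function and its payloads are illustrative only, not part of any real API:

```python
import concurrent.futures
import time

def handle_request(payload: str) -> str:
    """Simulated service that takes some time to respond."""
    time.sleep(0.1)
    return f"processed:{payload}"

# Synchronous: the client blocks until the response arrives.
sync_result = handle_request("order-1")

# Asynchronous: the client submits the request and keeps working;
# the response (if any) is collected later via a future.
with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(handle_request, "order-2")
    # ... the client is free to do other work here ...
    async_result = future.result()  # collected when convenient

print(sync_result)   # processed:order-1
print(async_result)  # processed:order-2
```

In a real system the future would typically be replaced by a callback, a reply message, or a reactive stream, but the shape of the interaction is the same.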

Communicating using the synchronous remote procedure invocation pattern

The client sends a request to the service, which processes the request and sends a response. Some clients may block waiting for a response, while others may have a reactive, non-blocking architecture. But unlike with messaging, the client assumes that the response will arrive in a timely manner.

The following diagram shows how remote procedure invocation (RPI) works. The client's business logic calls a proxy interface, implemented by the RPI proxy adapter class. The RPI proxy makes a request to the service.

The request is handled by the RPI Server Adapter class, which calls the service's business logic through the interface. It then sends a reply back to the RPI Proxy, which returns the result to the client's business logic.
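The client side of this arrangement can be sketched as follows. All class names and the in-memory transport are invented for illustration; a real proxy would be generated by, or built on top of, the chosen protocol's client library:

```python
from abc import ABC, abstractmethod

class OrderService(ABC):
    """The proxy interface the client's business logic programs against."""
    @abstractmethod
    def get_order(self, order_id: str) -> dict: ...

class OrderServiceRpiProxy(OrderService):
    """Client-side adapter: turns plain method calls into remote requests.
    `transport` stands in for the underlying protocol (REST, gRPC, ...)."""
    def __init__(self, transport):
        self.transport = transport

    def get_order(self, order_id: str) -> dict:
        # The wire protocol is encapsulated here; the caller just sees
        # an ordinary method call.
        return self.transport.request("GET", f"/orders/{order_id}")

class InMemoryTransport:
    """A fake transport so the sketch runs without a network."""
    def request(self, verb: str, path: str) -> dict:
        return {"verb": verb, "path": path, "status": 200}

proxy = OrderServiceRpiProxy(InMemoryTransport())
print(proxy.get_order("42"))  # {'verb': 'GET', 'path': '/orders/42', 'status': 200}
```

Because the business logic depends only on the `OrderService` interface, the transport can be swapped (or faked in tests) without touching the caller.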

The proxy interface usually encapsulates the underlying communication protocol. There are many protocols to choose from; here we focus on the two most popular: REST and gRPC.

1. REST API

A key concept of REST is the resource, which typically represents a business object (such as a customer or product) or a set of business objects. REST uses HTTP verbs to operate on resources, which are referenced by URLs. For example, a GET request returns a representation of a resource, typically an XML document or a JSON object, although other formats (such as binary) can also be used. A POST request creates a new resource, and a PUT request updates a resource.

Challenges of REST API:

  • Fetching multiple resources in a single request — REST resources typically focus on individual business objects, such as customers and orders, which makes fetching multiple related objects in a single request a challenge. For example, fetching an order and its associated customer typically requires multiple API calls. A common workaround is to enhance the API so that clients can fetch related resources in a single call, such as a GET request with an "expand" query parameter that names the related resources. While effective in many cases, this approach can be complex and time-consuming to implement, which has contributed to the rise of alternative technologies such as GraphQL for more streamlined data retrieval.
  • Mapping operations to HTTP verbs — A significant challenge in REST API design is assigning specific operations on business objects to the correct HTTP verbs. For example, updating an order may involve various operations, such as cancel or revise, and not every update meets the idempotence requirement of the HTTP PUT method. A common approach is to create a subresource for each update operation, such as POST /orders/{orderId}/cancel to cancel an order or POST /orders/{orderId}/revise to revise it. Another approach is to encode the operation in a URL query parameter. However, these methods do not fully follow REST principles. This difficulty in mapping operations to HTTP verbs has contributed to the popularity of alternatives such as gRPC.
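The subresource workaround can be sketched as a small helper that maps a non-idempotent update operation onto a POST request. The function name and the set of allowed operations are illustrative; only the path scheme mirrors the /orders/{orderId}/cancel example above:

```python
def update_operation_request(order_id: str, operation: str) -> tuple:
    """Map a non-idempotent update operation onto a POST to a subresource,
    since such operations do not fit PUT's idempotence requirement."""
    allowed = {"cancel", "revise"}  # illustrative set of update operations
    if operation not in allowed:
        raise ValueError(f"unsupported operation: {operation}")
    return ("POST", f"/orders/{order_id}/{operation}")

print(update_operation_request("42", "cancel"))  # ('POST', '/orders/42/cancel')
print(update_operation_request("42", "revise"))  # ('POST', '/orders/42/revise')
```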

There are many advantages to using REST:

  • Simple and familiar.
  • HTTP APIs can be tested in a browser using a plugin such as Postman or in the command line using curl (assuming JSON or another text format is used).
  • Directly supports request/response style communication.
  • HTTP is firewall-friendly.
  • No intermediate broker is required, which simplifies the system architecture.

There are also some disadvantages to using REST:

  • Only request/response style communication is supported.
  • Reduced availability. Since the client and service communicate directly, with no intermediary buffering messages, they must be operational for the entire exchange.
  • The client must know the location (URL) of the service instance. In modern applications, this is a non-trivial problem. The client must use a so-called service discovery mechanism to locate the service instance.
  • Fetching multiple resources in a single request is challenging.
  • Mapping multiple update operations to HTTP verbs is sometimes difficult.

2. Using gRPC

REST APIs often struggle to express multiple update operations with the limited set of HTTP verbs. gRPC offers an alternative: a binary messaging protocol that emphasizes an API-first approach. It leverages Protocol Buffers (Protobuf), a language-neutral serialization system developed by Google, and lets developers define APIs in a Protocol Buffers-based Interface Definition Language (IDL). The Protocol Buffer compiler can then generate client and server code for many programming languages, such as Java, C#, NodeJS, and Go. gRPC runs on top of HTTP/2 and supports both simple request/response RPCs and streaming RPCs, in which the server sends a stream of messages to the client or vice versa. The result is clear service interfaces with strongly typed methods, providing a solid framework for a variety of communication patterns in microservice architectures.
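As a sketch of what such a definition looks like, the following hypothetical Protocol Buffers IDL declares a service with both a simple request/response RPC and a server-streaming RPC. All names are illustrative, not taken from any real API:

```protobuf
syntax = "proto3";

// Hypothetical service definition; names are illustrative only.
service OrderService {
  // Simple request/response RPC.
  rpc GetOrder (GetOrderRequest) returns (Order);
  // Server-side streaming: the server sends a stream of order updates.
  rpc WatchOrder (GetOrderRequest) returns (stream Order);
}

message GetOrderRequest {
  string order_id = 1;
}

message Order {
  string order_id = 1;
  string status = 2;
}
```

Running this file through the Protocol Buffer compiler with a gRPC plugin produces strongly typed client stubs and server skeletons for the chosen languages.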

gRPC has several advantages:

  • Designing an API with rich update operations is quite simple.
  • It has an efficient and compact IPC mechanism, especially when exchanging large messages.
  • Bidirectional streams support both RPI and message passing style communications.
  • Enables interoperability of clients and services across a variety of programming languages.

gRPC also has some disadvantages:

  • JavaScript clients using the gRPC API require more work than using the REST/JSON API.
  • Older firewalls may not support HTTP/2.

gRPC is a powerful alternative to REST, but like REST, it is a synchronous communication mechanism and therefore subject to partial failure.

Communicate using asynchronous messaging pattern

When using messaging, services communicate by exchanging messages asynchronously. Messaging-based applications typically use a message broker, which acts as an intermediary between services. Service clients make requests to services by sending messages. If the service instance expects a reply, it replies to the client by sending a separate message. Because the communication is asynchronous, the client does not block waiting for a reply. Instead, the client assumes that the reply will not be received immediately.

1. Messaging Overview

The messaging model described in the book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf works as follows: messages are exchanged over message channels. A sender (an application or service) writes a message to a channel, and a receiver (an application or service) reads messages from the channel. Let's look at messages first, then channels.

2. About the message

A message consists of a message header and a message body.

A message header is a set of name-value pairs: metadata describing the data being sent. In addition to name-value pairs provided by the sender, the header contains values such as a unique message ID, generated by the sender or by the messaging infrastructure, and an optional return address that specifies the message channel to which a reply should be written.

The message body is the data sent in text or binary format.
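A minimal sketch of this structure in Python; the `Message` class and header names are illustrative, not a real messaging API:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Message:
    """A message: a header of name-value pairs plus a text/binary body."""
    body: bytes
    headers: dict = field(default_factory=dict)

    def __post_init__(self):
        # Infrastructure-supplied header: a unique message ID. The sender
        # may also set a return address naming the channel for replies.
        self.headers.setdefault("message_id", str(uuid.uuid4()))

msg = Message(b'{"order_id": "42"}', {"return_address": "reply-channel"})
print(sorted(msg.headers))  # ['message_id', 'return_address']
```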

There are several different types of messages:

  • Document — A generic message containing only data. The recipient decides how to interpret it. A reply to a command is an example of a document message.
  • Command — Contains data that instructs the recipient to perform some action. A message sent by a client is an example of a command.
  • Event — contains data describing something that happened. Publish/subscribe messages are often an example of an event.

3. About the message channel

Messages are sent over message channels. Message channels are a key component of the messaging infrastructure. While a message is a logical concept, a message channel is a concrete, physical concept that is typically instantiated by a message broker. There are two types of message channels: point-to-point channels and publish-subscribe channels.

The following diagram shows how they work:

  • Point-to-point channels pass messages from one sender to one receiver. The message broker ensures that each message is consumed by exactly one receiver. This type of channel is suitable for sending commands and publishing single-consumer events. The message broker usually does this by placing messages in a queue.
  • Publish-Subscribe Channels deliver messages from one sender to multiple receivers. The message broker ensures that each message is consumed by all receivers. This type of channel is suitable for publishing events. Message brokers usually do this by putting messages into topics.
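A toy in-memory broker makes the difference between the two channel types concrete. The `Broker` class below is a sketch under simplifying assumptions (no persistence, no acknowledgements), not a real messaging API:

```python
from collections import defaultdict, deque

class Broker:
    """Toy in-memory broker illustrating the two channel types."""
    def __init__(self):
        self.queues = defaultdict(deque)      # point-to-point channels
        self.subscribers = defaultdict(list)  # publish-subscribe channels

    # Point-to-point: each message is consumed by exactly one receiver.
    def send(self, queue: str, message: str):
        self.queues[queue].append(message)

    def receive(self, queue: str) -> str:
        return self.queues[queue].popleft()

    # Publish-subscribe: each message is delivered to all subscribers.
    def subscribe(self, topic: str, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, message: str):
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()

# A command goes to a queue and is consumed once.
broker.send("orders", "cmd:create-order")
print(broker.receive("orders"))  # cmd:create-order

# An event goes to a topic and reaches every subscriber.
seen = []
broker.subscribe("order-events", seen.append)
broker.subscribe("order-events", seen.append)
broker.publish("order-events", "evt:order-created")
print(seen)  # ['evt:order-created', 'evt:order-created']
```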

4. Advantages and disadvantages of messaging

There are several advantages to using messaging:

  • It is asynchronous and does not require both the client and the service to be running during the communication.
  • It enables you to implement publish/subscribe and publish/respond styles of communication.
  • It decouples clients and services. Clients request services by writing to a channel, and services provide services by reading messages from the channel. Clients and services do not communicate directly, so there is no need to know each other's location.
  • Clients can write requests to a message queue, which then acts as a load balancer across the available service instances.
  • The message broker buffers messages until they can be delivered, so messages are not lost if a service instance crashes.

There are some disadvantages to using message passing:

  • Increased complexity. When using messaging, you must write code to handle sending and receiving messages.
  • More complex debugging and testing. Messaging introduces a new form of communication in which you must track the state and flow of messages to debug the system.
  • Infrastructure overhead. The messaging infrastructure adds delivery overhead that can degrade performance in certain scenarios.

Conclusion

The choice of microservice communication method depends on the specific needs and design considerations of the system. Synchronous methods such as REST and gRPC are suitable for scenarios that require timely responses, while asynchronous messaging excels in decoupling services and improving system reliability and scalability. Understanding the pros and cons of these methods and their applicable scenarios is the key to designing an efficient, scalable, and reliable microservice architecture. I hope this article can provide valuable guidance and reference for your choice of microservice communication method.
