Today I will discuss API gateways with you. In a microservice architecture, the API Gateway is a common architectural design pattern. The following common problems in microservices call for introducing an API gateway:
- The granularity of the APIs provided by microservices usually differs from what a client needs. Microservices typically expose fine-grained APIs, which means a client has to interact with multiple services. For example, a client that needs a product details page may have to fetch data from numerous services.
- Different clients require different data. For example, the desktop browser version of a product details page is usually more detailed than the mobile version.
- Network performance is different for different types of clients. For example, mobile networks are generally much slower and have higher latency than non-mobile networks. And, of course, any WAN is much slower than a LAN.
- This means that the performance of the network used by native mobile clients is very different from the performance of the LAN used by server-side web applications. A server-side web application can make multiple requests to backend services without hurting the user experience, while a mobile client can only afford a few.
- The number of microservice instances and their locations (host + port) change dynamically.
- Service partitioning can change over time and should be hidden from clients.
- Services may use a variety of protocols, some of which may not be network-friendly.
Common API gateways mainly provide the following functions: - Reverse proxy and routing: The main reason why most projects adopt gateway solutions. It provides a single entry point for all clients accessing the backend API and hides the details of internal service deployment.
- Load balancing: A gateway can route a single incoming request to multiple backend destinations.
- Authentication and Authorization: The gateway should be able to successfully authenticate and allow only trusted clients to access the API, and also be able to authorize using methods like RBAC.
- IP List Whitelist/Blacklist: Allow or block certain IP addresses.
- Performance Analysis: Provides a way to record usage and other useful metrics associated with API calls.
- Rate limiting and flow control: the ability to throttle and control the volume of API calls.
- Request Transformation: Ability to transform requests and responses (including Headers and Bodies) before forwarding them further.
- Versioning: serving different versions of an API simultaneously, or rolling out an API gradually via canary releases or blue/green deployments.
- Circuit breaking: a microservices pattern that prevents cascading failures when a downstream service is unavailable.
- Multi-protocol support: WebSocket/GRPC.
- Caching: Reduces network bandwidth and round-trip time consumption. If frequently requested data can be cached, performance and response time can be improved.
- API documentation: If you plan to expose your API to developers outside your organization, you must consider using API documentation, such as Swagger or OpenAPI.
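To make the rate-limiting function above concrete, here is a minimal token-bucket sketch in Python. It is illustrative only; the names (`TokenBucket`, `allow_request`) are my own and not taken from any particular gateway:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, the scheme many gateways use."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow_request(self):
        now = time.monotonic()
        # Refill tokens according to elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 10 calls against a bucket of capacity 5, refilled 1 token/sec:
bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow_request() for _ in range(10)]
```

After the first five requests drain the bucket, further requests are rejected until tokens are refilled over time.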
Many open source projects provide API gateway support. Let's take a look at their respective architectures and features. To verify the basic functionality of these open source gateways, I wrote some code and used OpenAPI to generate four basic API services, in Golang, Node.js, Python Flask, and Java Spring. The APIs use the familiar pet store example, declared as follows:

```yaml
openapi: "3.0.0"
info:
  version: 1.0.0
  title: Swagger Petstore
  license:
    name: MIT
servers:
  - url: http://petstore.swagger.io/v1
paths:
  /pets:
    get:
      summary: List all pets
      operationId: listPets
      tags:
        - pets
      parameters:
        - name: limit
          in: query
          description: How many items to return at one time (max 100)
          required: false
          schema:
            type: integer
            format: int32
      responses:
        '200':
          description: A paged array of pets
          headers:
            x-next:
              description: A link to the next page of responses
              schema:
                type: string
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Pets"
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
    post:
      summary: Create a pet
      operationId: createPets
      tags:
        - pets
      responses:
        '201':
          description: Null response
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
  /pets/{petId}:
    get:
      summary: Info for a specific pet
      operationId: showPetById
      tags:
        - pets
      parameters:
        - name: petId
          in: path
          required: true
          description: The id of the pet to retrieve
          schema:
            type: string
      responses:
        '200':
          description: Expected response to a valid request
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Pet"
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
components:
  schemas:
    Pet:
      type: object
      required:
        - id
        - name
      properties:
        id:
          type: integer
          format: int64
        name:
          type: string
        tag:
          type: string
    Pets:
      type: array
      items:
        $ref: "#/components/schemas/Pet"
    Error:
      type: object
      required:
        - code
        - message
      properties:
        code:
          type: integer
          format: int32
        message:
          type: string
```
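As a quick sanity check of the schema above, the required fields of `Pet` can be enforced with a few lines of Python. This is a hand-rolled sketch, not a real OpenAPI validator:

```python
# Required/optional fields taken from the Pet schema in the spec above.
PET_REQUIRED = {"id", "name"}
PET_OPTIONAL = {"tag"}

def is_valid_pet(obj):
    """Check an object against the Pet schema: required keys present,
    no unknown keys, and basic type checks for id/name."""
    if not isinstance(obj, dict):
        return False
    keys = set(obj)
    if not PET_REQUIRED <= keys or not keys <= PET_REQUIRED | PET_OPTIONAL:
        return False
    return isinstance(obj.get("id"), int) and isinstance(obj.get("name"), str)

ok = is_valid_pet({"id": 1, "name": "Rex", "tag": "dog"})
bad = is_valid_pet({"name": "NoId"})
```

A real deployment would use a schema-aware validator generated from the spec instead of hand-written checks.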
The web services built this way are deployed as containers via Docker Compose:

```yaml
version: "3.7"
services:
  goapi:
    container_name: goapi
    image: naughtytao/goapi:0.1
    ports:
      - "18000:8080"
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          memory: 256M
  nodeapi:
    container_name: nodeapi
    image: naughtytao/nodeapi:0.1
    ports:
      - "18001:8080"
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          memory: 256M
  flaskapi:
    container_name: flaskapi
    image: naughtytao/flaskapi:0.1
    ports:
      - "18002:8080"
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          memory: 256M
  springapi:
    container_name: springapi
    image: naughtytao/springapi:0.1
    ports:
      - "18003:8080"
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          memory: 256M
```
While looking at each of these open source gateway architectures, we will also verify their most basic routing and forwarding functionality. Here, a user request of the form server/service_name/v1/ is sent to the API gateway, which routes it to the corresponding backend service based on the service name. We used K6 to run 1,000 requests with 100 concurrent connections. The results are shown in the figure above: the combined throughput of direct connections is about 1,100+ requests per second.

Nginx

Nginx is an asynchronous web server that can also be used as a reverse proxy, load balancer, and HTTP cache. It was created by Igor Sysoev and first publicly released in 2004. A company of the same name was founded in 2011 to provide support, and on March 11, 2019, Nginx was acquired by F5 Networks for $670 million. Nginx has the following characteristics:
- Written in C, with a small resource and memory footprint and high performance.
- One master process, multiple worker processes. When the Nginx server starts, a master process is created, which forks multiple worker processes; the workers handle client requests.
- Supports reverse proxying and layer 7 load balancing.
- High concurrency: Nginx handles requests asynchronously and non-blockingly, using epoll and an event queue.
- Serves static files quickly.
- Highly modular and easy to configure, with an active community that produces high-performance modules quickly.
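The event-driven, non-blocking model behind Nginx's high concurrency can be illustrated with Python's selectors module (which uses epoll on Linux). This toy example is mine, not Nginx code:

```python
import selectors
import socket

# One event loop watches many sockets at once instead of blocking on each
# connection -- the same idea behind Nginx's epoll-based workers.
sel = selectors.DefaultSelector()
left, right = socket.socketpair()
for s in (left, right):
    s.setblocking(False)

sel.register(right, selectors.EVENT_READ)
left.send(b"ping")

# select() returns only the sockets that are actually ready to read.
ready = sel.select(timeout=1)
messages = [key.fileobj.recv(16) for key, _ in ready]

sel.unregister(right)
left.close()
right.close()
```

A single-threaded loop like this can multiplex thousands of connections, which is why each Nginx worker stays single-threaded.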
As shown in the figure above, Nginx mainly consists of three parts: the master process, the worker processes, and the proxy cache. Master: the master process reads the configuration, binds ports, and creates and manages the worker processes; it does not process client requests itself. Worker: each worker is single-threaded and services client connections through an event loop, so a single worker can handle more than 1,000 requests at a time. Working within one memory space per process keeps memory usage low compared with one thread per connection. Cache: the Nginx proxy cache serves repeated requests very quickly from cache instead of going to the upstream server; on the first request, the response is stored in the cache.

To implement API routing and forwarding, you only need the following Nginx configuration:

```nginx
server {
    listen 80 default_server;

    location /goapi {
        rewrite ^/goapi(.*) $1 break;
        proxy_pass http://goapi:8080;
    }

    location /nodeapi {
        rewrite ^/nodeapi(.*) $1 break;
        proxy_pass http://nodeapi:8080;
    }

    location /flaskapi {
        rewrite ^/flaskapi(.*) $1 break;
        proxy_pass http://flaskapi:8080;
    }

    location /springapi {
        rewrite ^/springapi(.*) $1 break;
        proxy_pass http://springapi:8080;
    }
}
```
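The rewrite rules above strip the service prefix before proxying. The same logic can be expressed in a few lines of Python (an illustration of the rule, not of Nginx internals):

```python
import re

# Mirror of the Nginx config: service prefix -> upstream address.
ROUTES = {
    "goapi": "http://goapi:8080",
    "nodeapi": "http://nodeapi:8080",
    "flaskapi": "http://flaskapi:8080",
    "springapi": "http://springapi:8080",
}

def route(path):
    """Return (upstream, rewritten_path), like `rewrite ^/goapi(.*) $1`."""
    for name, upstream in ROUTES.items():
        m = re.match(rf"^/{name}(.*)", path)
        if m:
            return upstream, m.group(1) or "/"
    return None, path

upstream, rewritten = route("/goapi/v1/pets")
```

The backend service thus sees `/v1/pets`, exactly the path it was generated to serve, with no knowledge of the gateway prefix.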
We configure one route per service: goapi, nodeapi, flaskapi, and springapi. Before forwarding, we use rewrite to strip the service name from the path and then send the request to the corresponding service. Nginx and the four backend services are deployed as containers on the same network, with routing handled by the gateway. The Nginx deployment is as follows:

```yaml
version: "3.7"
services:
  web:
    container_name: nginx
    image: nginx
    volumes:
      - ./templates:/etc/nginx/templates
      - ./conf/default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8080:80"
    environment:
      - NGINX_HOST=localhost
      - NGINX_PORT=80
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          memory: 256M
```
The K6 test results through the Nginx gateway are as follows: 1,093 requests per second, very close to the direct connection without a gateway. From a functional perspective, Nginx can meet most users' needs for an API gateway, supports additional functionality through configuration and plug-ins, and performs excellently. Its drawback is the lack of a management UI and management API; most of the work has to be done by hand-editing configuration files. The commercial version is more complete in this regard.

Kong

Kong is an open source API gateway built on NGINX and OpenResty. Kong's overall infrastructure consists of three main parts: NGINX provides the protocol implementation and worker process management, OpenResty provides Lua integration and hooks into NGINX's request-processing phases, and Kong itself uses these hooks to route and transform requests. For storage of all configuration, Kong supports Cassandra or Postgres. Kong ships with a variety of plugins providing access control, security, caching, documentation, and other features; custom plugins can also be written in Lua. Kong can be deployed as a Kubernetes Ingress and supports gRPC and WebSocket proxying.

NGINX provides a robust HTTP server infrastructure: it handles HTTP request processing, TLS encryption, request logging, and operating system resource allocation (for example, listening for and managing client connections and spawning new processes). NGINX has a declarative configuration file that resides in the file system of its host. While some Kong functionality is possible with NGINX configuration alone (for example, routing upstream requests based on the requested URL), modifying that configuration requires operating-system-level access to edit the files and a reload of NGINX.
Kong, by contrast, allows users to update its configuration through a RESTful HTTP API. Kong's NGINX configuration is fairly basic: apart from standard headers, listening ports, and log paths, most of the configuration is delegated to OpenResty. In some cases it is useful to add your own NGINX configuration alongside Kong, such as serving a static website next to the API gateway; for that, you can modify the configuration template that Kong uses.

Requests processed by NGINX go through a series of phases. Many NGINX features (for example, modules written in C) hook into these phases (for example, gzip compression). Although you can write your own modules, NGINX must be recompiled every time a module is added or updated. To simplify adding new features, Kong uses OpenResty, a software suite that bundles NGINX with a set of modules, LuaJIT, and a set of Lua libraries. Chief among them is ngx_http_lua_module, an NGINX module that embeds Lua and provides Lua equivalents for most NGINX request phases. This effectively allows NGINX modules to be developed in Lua while maintaining high performance (LuaJIT is quite fast), and Kong uses it to provide its core configuration-management and plugin infrastructure.

Kong provides a framework through its plugin architecture that can hook into the request phases described above. Starting from the example above, both the Key Auth and ACL plugins control whether the client (also known as the consumer) should be able to make requests. Each plugin defines its own access function in its handler, and that function executes via kong.access() for each plugin enabled on a given route or service. The order of execution is determined by a priority value: if Key Auth has a priority of 1003 and ACL a priority of 950, Kong first executes Key Auth's access function; if it does not reject the request, the ACL plugin runs, and the request is then passed upstream via proxy_pass.

Since Kong's request routing and handling is controlled through its admin API, plugin configurations can be added and removed on the fly without editing the underlying NGINX configuration: Kong essentially provides a way to inject location blocks (defined via the API) and their configuration, by attaching plugins, certificates, and so on to those APIs. We use the following configuration to deploy Kong into containers (omitting the four microservices):

```yaml
version: '3.7'

volumes:
  kong_data: {}

networks:
  kong-net:
    external: false

services:
  kong:
    image: "${KONG_DOCKER_TAG:-kong:latest}"
    user: "${KONG_USER:-kong}"
    depends_on:
      - db
    environment:
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_ADMIN_LISTEN: '0.0.0.0:8001'
      KONG_CASSANDRA_CONTACT_POINTS: db
      KONG_DATABASE: postgres
      KONG_PG_DATABASE: ${KONG_PG_DATABASE:-kong}
      KONG_PG_HOST: db
      KONG_PG_USER: ${KONG_PG_USER:-kong}
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_PG_PASSWORD_FILE: /run/secrets/kong_postgres_password
    secrets:
      - kong_postgres_password
    networks:
      - kong-net
    ports:
      - "8080:8000/tcp"
      - "127.0.0.1:8001:8001/tcp"
      - "8443:8443/tcp"
      - "127.0.0.1:8444:8444/tcp"
    healthcheck:
      test: ["CMD", "kong", "health"]
      interval: 10s
      timeout: 10s
      retries: 10
    restart: on-failure
    deploy:
      restart_policy:
        condition: on-failure

  db:
    image: postgres:9.5
    environment:
      POSTGRES_DB: ${KONG_PG_DATABASE:-kong}
      POSTGRES_USER: ${KONG_PG_USER:-kong}
      POSTGRES_PASSWORD_FILE: /run/secrets/kong_postgres_password
    secrets:
      - kong_postgres_password
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "${KONG_PG_USER:-kong}"]
      interval: 30s
      timeout: 30s
      retries: 3
    restart: on-failure
    deploy:
      restart_policy:
        condition: on-failure
    stdin_open: true
    tty: true
    networks:
      - kong-net
    volumes:
      - kong_data:/var/lib/postgresql/data

secrets:
  kong_postgres_password:
    file: ./POSTGRES_PASSWORD
```
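Kong's priority-ordered plugin execution (Key Auth at priority 1003 running before ACL at 950) can be sketched as follows. The plugin functions here are hypothetical stand-ins for illustration, not Kong's actual Lua API:

```python
# Each (name, priority, access_fn) mimics a plugin's access phase.
# access_fn returns True to let the request continue, False to reject it.
def key_auth_access(request):
    return request.get("apikey") == "secret"

def acl_access(request):
    return request.get("consumer") in {"alice", "bob"}

plugins = [
    ("acl", 950, acl_access),
    ("key-auth", 1003, key_auth_access),
]

def handle(request):
    # Higher priority runs first, like Kong's plugin ordering.
    for name, _prio, access in sorted(plugins, key=lambda p: -p[1]):
        if not access(request):
            return f"rejected by {name}"
    return "proxied upstream"

r1 = handle({"apikey": "secret", "consumer": "alice"})
r2 = handle({"apikey": "wrong", "consumer": "alice"})
```

An invalid API key is rejected by key-auth before the ACL plugin ever runs, matching the priority ordering described above.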
We chose PostgreSQL as the database. The open source version has no dashboard, so we use the REST API to create all gateway routes:

```shell
curl -i -X POST http://localhost:8001/services \

curl -i -X POST http://localhost:8001/services/goapi/routes \

```
You need to create a service first, and then create a route under that service. The K6 stress test results are as follows: at 705 requests per second, throughput is significantly lower than Nginx; all those features have a cost.

APISIX

Apache APISIX is a dynamic, real-time, high-performance API gateway that provides rich traffic management features such as load balancing, dynamic upstreams, canary releases, circuit breaking, authentication, and observability. APISIX was created by China's Zhiliu Technology in April 2019, open sourced in June, and entered the Apache Incubator in October of the same year. Zhiliu Technology's corresponding commercial product is called API7. APISIX is designed to handle a large number of requests and to keep the barrier to secondary development low. Its main features are:
- Cloud-native design: lightweight and easy to containerize.
- Integrates with statistics and monitoring components such as Prometheus, Apache SkyWalking, and Zipkin.
- Supports proxy protocols such as gRPC, Dubbo, WebSocket, and MQTT, plus protocol transcoding from HTTP to gRPC, to adapt to various situations.
- Acts as an OpenID relying party, connecting with Auth0, Okta, and other authentication providers.
- Supports serverless by dynamically executing user functions at runtime, making the gateway's edge nodes more flexible.
- Supports hot loading of plugins.
- No user lock-in; supports hybrid cloud deployment architectures.
- Gateway nodes are stateless and can be scaled flexibly.
From this perspective, the API gateway can replace Nginx for north-south traffic, and can also play the role of an Istio control plane and Envoy data plane for east-west traffic. The architecture of APISIX is shown in the figure below: APISIX consists of a data plane that dynamically controls request traffic, a control plane that stores and synchronizes gateway configuration, and an AI plane that orchestrates plugins and performs real-time analysis and processing of request traffic. It is built on top of the Nginx reverse proxy and the key-value store etcd to provide a lightweight gateway. It is written primarily in Lua, a programming language similar to Python. It uses a Radix tree for routing and a prefix tree for IP matching. Using etcd rather than a relational database for configuration brings it closer to cloud native and keeps the whole gateway system available even if a server goes down. All components are written as plugins, so the modular design lets feature developers focus only on their own projects. Built-in plugins include flow control and rate limiting, authentication, request rewriting, URI redirection, open tracing, and serverless. APISIX supports the OpenResty and Tengine runtimes, can run on bare metal or in Kubernetes, and supports both x86 and ARM64.

We also use Docker Compose to deploy APISIX:

```yaml
version: "3.7"

services:
  apisix-dashboard:
    image: apache/apisix-dashboard:2.4
    restart: always
    volumes:
      - ./dashboard_conf/conf.yaml:/usr/local/apisix-dashboard/conf/conf.yaml
    ports:
      - "9000:9000"
    networks:
      apisix:
        ipv4_address: 172.18.5.18

  apisix:
    image: apache/apisix:2.3-alpine
    restart: always
    volumes:
      - ./apisix_log:/usr/local/apisix/logs
      - ./apisix_conf/config.yaml:/usr/local/apisix/conf/config.yaml:ro
    depends_on:
      - etcd
    ##network_mode: host
    ports:
      - "8080:9080/tcp"
      - "9443:9443/tcp"
    networks:
      apisix:
        ipv4_address: 172.18.5.11
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          memory: 256M

  etcd:
    image: bitnami/etcd:3.4.9
    user: root
    restart: always
    volumes:
      - ./etcd_data:/etcd_data
    environment:
      ETCD_DATA_DIR: /etcd_data
      ETCD_ENABLE_V2: "true"
      ALLOW_NONE_AUTHENTICATION: "yes"
      ETCD_ADVERTISE_CLIENT_URLS: "http://0.0.0.0:2379"
      ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2379"
    ports:
      - "2379:2379/tcp"
    networks:
      apisix:
        ipv4_address: 172.18.5.10

networks:
  apisix:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16
```
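APISIX's prefix-tree route matching mentioned above can be illustrated with a tiny trie in Python. This is a toy model of the idea; APISIX's actual Radix tree implementation is separate C/Lua code:

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.upstream = None

class Router:
    """Longest-prefix matcher over path segments, trie-style."""

    def __init__(self):
        self.root = TrieNode()

    def add(self, prefix, upstream):
        node = self.root
        for seg in prefix.strip("/").split("/"):
            node = node.children.setdefault(seg, TrieNode())
        node.upstream = upstream

    def match(self, path):
        node, best = self.root, None
        for seg in path.strip("/").split("/"):
            if seg not in node.children:
                break
            node = node.children[seg]
            if node.upstream is not None:
                best = node.upstream  # remember the deepest match so far
        return best

router = Router()
router.add("/goapi", "goapi:8080")
router.add("/goapi/admin", "admin:9090")
hit = router.match("/goapi/v1/pets")
deep = router.match("/goapi/admin/users")
miss = router.match("/unknown/v1")
```

Longest-prefix semantics mean `/goapi/admin/users` prefers the more specific `/goapi/admin` route over the general `/goapi` one.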
Open source APISIX includes a dashboard for managing routes, rather than restricting the dashboard to a commercial version as Kong does. However, the APISIX dashboard does not support rewriting the route URI, so we use the REST API to create routes. The command to create a service route is as follows:

```shell
curl


'{
  "uri": "/goapi/*",
  "plugins": {
    "proxy-rewrite": {
      "regex_uri": ["^/goapi(.*)$", "$1"]
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "goapi:8080": 1
    }
  }
}'
```
The K6 stress test results are as follows: APISIX achieved a good score of 1,155 requests per second, close to the performance without a gateway at all; its caching may have played a role here.

Tyk

Tyk is an open source API gateway built on Golang and Redis. It was created in 2014, predating AWS's API-gateway-as-a-service offering. Tyk is written in Golang and uses Golang's own HTTP server. It supports several deployment modes: cloud, hybrid (gateway in your own infrastructure), and on-premises. Tyk consists of three components:
- Gateway: the proxy that handles all application traffic.
- Dashboard: the interface from which Tyk can be managed, showing metrics and organizing APIs.
- Pump: persists metric data and exports it to MongoDB (built-in), Elasticsearch, InfluxDB, and others.
We again use Docker Compose to create a Tyk gateway for functional verification:

```yaml
version: '3.7'
services:
  tyk-gateway:
    image: tykio/tyk-gateway:v3.1.1
    ports:
      - "8080:8080"
    volumes:
      - ./tyk.standalone.conf:/opt/tyk-gateway/tyk.conf
      - ./apps:/opt/tyk-gateway/apps
      - ./middleware:/opt/tyk-gateway/middleware
      - ./certs:/opt/tyk-gateway/certs
    environment:
      - TYK_GW_SECRET=foo
    depends_on:
      - tyk-redis
  tyk-redis:
    image: redis:5.0-alpine
    ports:
      - "6379:6379"
```
Tyk's dashboard is also commercial-only, so once again we use the API to create routes. Tyk creates and manages routes through the concept of apps; you can also write the app definition files directly.

```shell
curl


'{
  "name": "GO API",
  "slug": "go-api",
  "api_id": "goapi",
  "org_id": "goapi",
  "use_keyless": true,
  "auth": {
    "auth_header_name": "Authorization"
  },
  "definition": {
    "location": "header",
    "key": "x-api-version"
  },
  "version_data": {
    "not_versioned": true,
    "versions": {
      "Default": {
        "name": "Default",
        "use_extended_paths": true
      }
    }
  },
  "proxy": {
    "listen_path": "/goapi/",
    "target_url": "http://host.docker.internal:18000/",
    "strip_listen_path": true
  },
  "active": true
}'
```
The K6 stress test results are as follows: Tyk comes in at around 400-600 requests per second, close to Kong in performance.

Zuul

Zuul is Netflix's open source, Java-based API gateway component. Zuul consists of multiple components:
- zuul-core: a library containing the core functionality for compiling and executing filters.
- zuul-simple-webapp: This webapp shows a simple example of how to build an application using zuul-core.
- zuul-netflix: A library that adds other NetflixOSS components to Zuul, for example, routing requests using Ribbon.
- zuul-netflix-webapp: A webapp that packages zuul-core and zuul-netflix into one easy-to-use package.
Zuul provides flexibility and resilience, in part by leveraging other Netflix OSS components:
- Hystrix is used for flow control. It wraps calls to the origin, allowing traffic to be shed and prioritized when problems occur.
- Ribbon is a client for all outbound requests from Zuul, providing detailed information about network performance and errors, and handling software load balancing for even load distribution.
- Turbine aggregates fine-grained metrics in real time so that we can observe and react to problems quickly.
- Archaius handles configuration and provides the ability to change properties dynamically.
At the core of Zuul is a series of filters that can act on HTTP requests and responses as they are routed. The main characteristics of a Zuul filter are:
- Type: defines the stage in the routing process at which the filter is applied (though it can be any custom string).
- Execution Order: Applied within a type, defines the execution order across multiple filters.
- Criteria: The conditions required for the filter to execute.
- Action: The action to be performed if the condition is met.
For example:

```groovy
class DeviceDelayFilter extends ZuulFilter {

    def static Random rand = new Random()

    @Override
    String filterType() {
        return 'pre'
    }

    @Override
    int filterOrder() {
        return 5
    }

    @Override
    boolean shouldFilter() {
        return RequestContext.getRequest().
            getParameter("deviceType")?.equals("BrokenDevice") ?: false
    }

    @Override
    Object run() {
        sleep(rand.nextInt(20000)) // sleep for a random time
                                   // between 0 and 20 seconds
    }
}
```
Zuul provides a framework to dynamically read, compile, and run these filters. Filters do not communicate directly with each other. Instead, the state is shared through a RequestContext that is unique to each request. Filters are written in Groovy. There are several standard filter types that correspond to the typical lifecycle of a request: - Pre filters are executed before routing to the origin. Examples include requesting authentication, selecting an origin server, and logging debug information.
- The Route filter handles routing the request to the origin. This is where the raw HTTP request is constructed and sent using Apache HttpClient or Netflix Ribbon.
- Post filters are executed after a request is routed to an origin. Examples include adding standard HTTP headers to the response, collecting statistics and metrics, and streaming the response from the origin to the client.
- The Error filter is executed when an error occurs in one of the other stages.
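The pre/route/post lifecycle above can be sketched in Python. The filter classes here are hypothetical and only mimic the shape of Zuul's ZuulFilter contract (filterType, filterOrder, shouldFilter, run):

```python
# Each filter mirrors Zuul's contract: a type (stage), an order,
# a predicate, and an action that mutates the shared request context.
class Filter:
    def __init__(self, ftype, order, action):
        self.ftype, self.order, self.action = ftype, order, action

    def should_filter(self, ctx):
        return True

    def run(self, ctx):
        self.action(ctx)

filters = [
    Filter("post", 10, lambda ctx: ctx.setdefault("headers", []).append("X-Std")),
    Filter("pre", 5, lambda ctx: ctx.update(authenticated=True)),
    Filter("route", 1, lambda ctx: ctx.update(origin="goapi:8080")),
]

def process(ctx):
    # Stages run pre -> route -> post, ordered by filterOrder within a stage,
    # sharing state only through the per-request context.
    for stage in ("pre", "route", "post"):
        stage_filters = sorted((f for f in filters if f.ftype == stage),
                               key=lambda f: f.order)
        for f in stage_filters:
            if f.should_filter(ctx):
                f.run(ctx)
    return ctx

ctx = process({})
```

As in Zuul, the filters never call each other directly; all coordination happens through the shared context.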
Spring Cloud creates an embedded Zuul proxy to simplify development of a very common use case where a UI application wants to proxy calls to one or more backend services. This feature is useful for user interfaces to proxy to required backend services, thus avoiding the need to manage CORS and authentication issues independently for all backends. To enable it, annotate a Spring Boot main class with @EnableZuulProxy , which will forward local calls to the appropriate service. Zuul is a Java library. It is not an out-of-the-box API gateway, so you need to develop an application using Zuul to test its functions. The corresponding Java POM is as follows: - <project
- xmlns= "http://maven.apache.org/POM/4.0.0"
- xmlns:xsi= "http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation= "http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" >
- <modelVersion>4.0.0</modelVersion>
- <groupId>naughtytao.apigateway</groupId>
- <artifactId>demo</artifactId>
- <version>0.0.1-SNAPSHOT</version>
- <parent>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-parent</artifactId>
- <version>1.4.7.RELEASE</version>
- <relativePath />
- <!
- </parent>
- <properties>
- <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
- <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
- <java.version>1.8</java.version>
- <!
- <spring-cloud.version>Camden.SR7</spring-cloud.version>
- </properties>
- <dependencyManagement>
- <dependencies>
- <dependency>
- <groupId>org.springframework.cloud</groupId>
- <artifactId>spring-cloud-dependencies</artifactId>
- <version>${spring-cloud.version}</version>
- <type>pom</type>
- <scope>import</scope>
- </dependency>
- </dependencies>
- </dependencyManagement>
- <dependencies>
- <dependency>
- <groupId>org.springframework.cloud</groupId>
- <artifactId>spring-cloud-starter-zuul</artifactId>
- </dependency>
- <dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-actuator</artifactId>
- <exclusions>
- <exclusion>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-logging</artifactId>
- </exclusion>
- </exclusions>
- </dependency>
- <dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-log4j2</artifactId>
- </dependency>
-
- <!
- <!
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-security</artifactId>
- </dependency>
- <dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-web</artifactId>
- </dependency>
- <!
- <dependency>
- <groupId>jakarta.xml.bind</groupId>
- <artifactId>jakarta.xml.bind-api</artifactId>
- <version>2.3.2</version>
- </dependency>
-
- <!
- <dependency>
- <groupId>org.glassfish.jaxb</groupId>
- <artifactId>jaxb-runtime</artifactId>
- <version>2.3.2</version>
- </dependency>
- <dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-test</artifactId>
- <scope>test</scope>
- </dependency>
- <dependency>
- <groupId>org.junit.jupiter</groupId>
- <artifactId>junit-jupiter-api</artifactId>
- <version>5.0.0-M5</version>
- <scope>test</scope>
- </dependency>
- </dependencies>
- <build>
- <plugins>
- <plugin>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-maven-plugin</artifactId>
- </plugin>
- <plugin>
- <groupId>org.apache.maven.plugins</groupId>
- <artifactId>maven-compiler-plugin</artifactId>
- <version>3.3</version>
- <configuration>
- <source>1.8</source>
- <target>1.8</target>
- </configuration>
- </plugin>
- </plugins>
- </build>
- </project>
The main application code is as follows:

```java
package naughtytao.apigateway.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;
import org.springframework.context.annotation.ComponentScan;

@SpringBootApplication
@EnableAutoConfiguration(exclude = { RabbitAutoConfiguration.class })
@EnableZuulProxy
@ComponentScan("naughtytao.apigateway.demo")
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
```
The Docker build file is as follows:

```dockerfile
FROM maven:3.6.3-openjdk-11
WORKDIR /usr/src/app
COPY src ./src
COPY pom.xml ./
RUN mvn -f ./pom.xml clean package -Dmaven.wagon.http.ssl.insecure=true -Dmaven.wagon.http.ssl.allowall=true -Dmaven.wagon.http.ssl.ignore.validity.dates=true

EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/usr/src/app/target/demo-0.0.1-SNAPSHOT.jar"]
```
The routing configuration is written in application.properties:

```properties
# Zuul routes
zuul.routes.goapi.url=http://goapi:8080
zuul.routes.nodeapi.url=http://nodeapi:8080
zuul.routes.flaskapi.url=http://flaskapi:8080
zuul.routes.springapi.url=http://springapi:8080

ribbon.eureka.enabled=false
server.port=8080
```
We also use Docker Compose to run Zuul's gateway for verification:

```yaml
version: '3.7'
services:
  gateway:
    image: naughtytao/zuulgateway:0.1
    ports:
      - 8080:8080
    volumes:
      - ./config/application.properties:/usr/src/app/config/application.properties
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          memory: 256M
```
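Zuul 1 handles each request on a dedicated thread in a blocking fashion, while Zuul 2 moved to an asynchronous, non-blocking model. The essential difference can be sketched in plain Java with the standard library (an illustration only, not actual Zuul code; the handler names and simulated latency are made up):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustration (not Zuul code): a blocking handler ties up one thread
// per in-flight request, while an asynchronous handler hands the work
// off and returns a future immediately, freeing the caller's thread.
public class BlockingVsAsync {

    // Zuul 1 style: the calling thread blocks until the backend responds.
    static String handleBlocking(String request) throws InterruptedException {
        Thread.sleep(50); // simulated backend latency holds the thread
        return "response to " + request;
    }

    // Zuul 2 style: the caller gets a future at once and stays free.
    static CompletableFuture<String> handleAsync(String request, ExecutorService io) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(50); // latency no longer blocks the caller
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "response to " + request;
        }, io);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService io = Executors.newFixedThreadPool(4);
        CompletableFuture<String> f = handleAsync("GET /goapi/products", io);
        // ... the main thread is free to accept more requests here ...
        System.out.println(f.join());
        io.shutdown();
    }
}
```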
The results of the K6 stress test are as follows: under the same configuration (single core, 256M), Zuul's result is significantly worse than the other gateways, at only around 200. With more resources allocated (4 cores, 2G), Zuul's performance rises to 600-800, so Zuul's demand for resources is quite evident.

It is also worth mentioning that we are using Zuul 1; Netflix has since released Zuul 2, which significantly reworked the architecture. Zuul 1 is essentially a synchronous Servlet using a multi-threaded blocking model, whereas Zuul 2 implements an asynchronous, non-blocking programming model based on Netty. The synchronous approach is easier to debug, but multithreading itself consumes CPU and memory, so its performance is worse. Running in non-blocking mode, Zuul 2 has low per-connection thread overhead, supports more connections, and consumes fewer resources.

Gravitee

Gravitee is an open source, Java-based, easy-to-use, high-performance, and cost-effective API platform from Gravitee.io that helps organizations secure, publish, and analyze their APIs. Gravitee can create and manage APIs in two ways: Design Studio and Path.

Gravitee provides a gateway, an API portal, and API management. The gateway and management API are open source; the portal requires a registered license. The backend uses MongoDB for storage and supports Elasticsearch access.

We also use Docker Compose to deploy the entire Gravitee stack:
```yaml
#
# Copyright (C) 2015 The Gravitee team (http://gravitee.io)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
version: '3.7'

networks:
  frontend:
    name: frontend
  storage:
    name: storage

volumes:
  data-elasticsearch:
  data-mongo:

services:
  mongodb:
    image: mongo:${MONGODB_VERSION:-3.6}
    container_name: gio_apim_mongodb
    restart: always
    volumes:
      - data-mongo:/data/db
      - ./logs/apim-mongodb:/var/log/mongodb
    networks:
      - storage

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION:-7.7.0}
    container_name: gio_apim_elasticsearch
    restart: always
    volumes:
      - data-elasticsearch:/usr/share/elasticsearch/data
    environment:
      - http.host=0.0.0.0
      - transport.host=0.0.0.0
      - xpack.security.enabled=false
      - xpack.monitoring.enabled=false
      - cluster.name=elasticsearch
      - bootstrap.memory_lock=true
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile: 65536
    networks:
      - storage

  gateway:
    image: graviteeio/apim-gateway:${APIM_VERSION:-3}
    container_name: gio_apim_gateway
    restart: always
    ports:
      - "8082:8082"
    depends_on:
      - mongodb
      - elasticsearch
    volumes:
      - ./logs/apim-gateway:/opt/graviteeio-gateway/logs
    environment:
      - gravitee_management_mongodb_uri=mongodb://mongodb:27017/gravitee?serverSelectionTimeoutMS=5000&connectTimeoutMS=5000&socketTimeoutMS=5000
      - gravitee_ratelimit_mongodb_uri=mongodb://mongodb:27017/gravitee?serverSelectionTimeoutMS=5000&connectTimeoutMS=5000&socketTimeoutMS=5000
      - gravitee_reporters_elasticsearch_endpoints_0=http://elasticsearch:9200
    networks:
      - storage
      - frontend
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          memory: 256M

  management_api:
    image: graviteeio/apim-management-api:${APIM_VERSION:-3}
    container_name: gio_apim_management_api
    restart: always
    ports:
      - "8083:8083"
    links:
      - mongodb
      - elasticsearch
    depends_on:
      - mongodb
      - elasticsearch
    volumes:
      - ./logs/apim-management-api:/opt/graviteeio-management-api/logs
    environment:
      - gravitee_management_mongodb_uri=mongodb://mongodb:27017/gravitee?serverSelectionTimeoutMS=5000&connectTimeoutMS=5000&socketTimeoutMS=5000
      - gravitee_analytics_elasticsearch_endpoints_0=http://elasticsearch:9200
    networks:
      - storage
      - frontend

  management_ui:
    image: graviteeio/apim-management-ui:${APIM_VERSION:-3}
    container_name: gio_apim_management_ui
    restart: always
    ports:
      - "8084:8080"
    depends_on:
      - management_api
    environment:
      - MGMT_API_URL=http://localhost:8083/management/organizations/DEFAULT/environments/DEFAULT/
    volumes:
      - ./logs/apim-management-ui:/var/log/nginx
    networks:
      - frontend

  portal_ui:
    image: graviteeio/apim-portal-ui:${APIM_VERSION:-3}
    container_name: gio_apim_portal_ui
    restart: always
    ports:
      - "8085:8080"
    depends_on:
      - management_api
    environment:
      - PORTAL_API_URL=http://localhost:8083/portal/environments/DEFAULT
    volumes:
      - ./logs/apim-portal-ui:/var/log/nginx
    networks:
      - frontend
```
We use the management UI to create the four corresponding APIs that route through the gateway; this can also be done through the management API. Gravitee is the only product among these open source gateways with an open source management UI.

The results of the K6 stress test are as follows: similar to Zuul, which is also written in Java, Gravitee's result only reaches about 200, with some errors. We again had to increase the gateway's resource allocation to 4 cores and 2G, after which performance reached 500-700, slightly better than Zuul.

Summary

This article analyzed the architecture and basic functions of several open source API gateways to provide some basic reference information for architecture selection. The test data in this article is simple and the test scenarios are limited, so it cannot serve as the basis for a real selection.

- Nginx: a high-performance gateway written in C, with many plug-ins. If your API management needs are relatively simple and you accept configuring routes manually, Nginx is a good choice.
- Kong: an API gateway built on Nginx with OpenResty and Lua extensions, using PostgreSQL in the backend. It is feature-rich and very popular in the community, but loses considerable performance compared with plain Nginx. If you need its functionality and scalability, consider Kong.
- APISIX: similar to Kong in architecture, but cloud-native in design and using etcd as the backend. It has a considerable performance advantage over Kong and suits cloud-native deployments with high performance requirements. In particular, APISIX supports the MQTT protocol, which makes it very friendly for building IoT applications.
- Tyk: written in Golang with Redis in the backend, and its performance is good. If you like Golang, consider it. Note, however, that Tyk is licensed under the MPL, under which modified code cannot be closed source; this is not very friendly to some commercial applications.
- Zuul: Netflix's Java-based API gateway component. It is not an out-of-the-box gateway: it has to be built together with your Java application, and most functions come from integrating other components. It suits teams familiar with Java and applications built in Java. Its disadvantage is performance: compared with the other open source products here, it performs worse and needs more resources to reach the same level.
- Gravitee: an open source, Java-based API management platform from Gravitee.io that manages the full life cycle of APIs; even the open source version has good UI support. Because it is also built in Java, performance is likewise a weakness, so it suits scenarios with a strong need for API management.

All the code in this article can be obtained here: https://github.com/gangtao/api-gateway
Author: Gang Tao | Editor: Tao Jialong | Source: zhuanlan.zhihu.com/p/358862217