K8s Service Mesh Practice: Introduction to Istio

Background

We are finally entering the long-awaited service mesh series. So far we have covered:

  • How to deploy applications to Kubernetes
  • How to call services
  • How to access our services via domain name
  • How to use Kubernetes' built-in configuration ConfigMap

That is enough to develop a web application of ordinary scale. In enterprises, however, applications often have complex call relationships, and requests between them also need to be managed: common requirements include rate limiting, degradation, tracing, monitoring, and load balancing.

Before Kubernetes, these problems were usually solved by microservice frameworks such as Dubbo and Spring Cloud, which provide the corresponding features.

With Kubernetes, these concerns are better handed to a dedicated cloud-native component: Istio, the subject of this article and the most widely adopted service mesh solution.

Istio's official description is fairly concise and only briefly lists its main features:

  • Rate limiting and degradation
  • Routing, forwarding, and load balancing
  • Ingress gateway with TLS authentication
  • Canary (grayscale) releases, etc.

From the official architecture diagram, we can see that Istio is divided into a control plane and a data plane.

The control plane can be understood as Istio's own management functionality:

  • For example, service registration and discovery
  • Managing and configuring the network rules required by the data plane, etc.

The data plane can be understood as our business applications together with their Envoy sidecars: all traffic entering and leaving an application passes through its Envoy proxy.

This is what makes features such as load balancing, circuit breaking, and authentication and authorization possible.

Installation

First, install the Istio command-line tool (istioctl).

The prerequisite is a working Kubernetes environment.

Linux uses:

 curl -L https://istio.io/downloadIstio | sh -

Mac can use brew:

 brew install istioctl

For other environments, you can download Istio and configure environment variables:

 export PATH=$PWD/bin:$PATH

We can then install the control plane with the install command.

By default, it targets the Kubernetes cluster currently configured for kubectl.

 istioctl install --set profile=demo -y
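
After the install completes, a quick sanity check is to list the control-plane pods; this is a sketch, and the exact pods and their names depend on your cluster and Istio version:

```shell
# The demo profile installs istiod plus the ingress and egress gateways
# into the istio-system namespace; all pods should reach Running.
kubectl get pods -n istio-system
```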

The profile can take several different values; for demonstration purposes, we use demo.
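
If the list of profiles is not at hand, istioctl can print the ones bundled with your release (the exact set varies by version):

```shell
# List the configuration profiles shipped with this istioctl release.
# Recent releases typically include default, demo, minimal, empty,
# preview and remote.
istioctl profile list
```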

Usage

 # Enable automatic sidecar injection for the default namespace
 $ k label namespace default istio-injection=enabled
 $ k describe ns default
 Name:             default
 Labels:           istio-injection=enabled
                   kubernetes.io/metadata.name=default
 Annotations:      <none>
 Status:           Active

 No resource quota.

 No LimitRange resource.

Labeling the namespace tells the Istio control plane which namespaces' Pods should automatically have the sidecar injected.

Here we enable automatic sidecar injection for the default namespace, then deploy the deployment-istio.yaml we used earlier.

 $ k apply -f deployment/deployment-istio.yaml
 $ k get pod
 NAME                                  READY   STATUS    RESTARTS
 k8s-combat-service-5bfd78856f-8zjjf   2/2     Running   0
 k8s-combat-service-5bfd78856f-mblqd   2/2     Running   0
 k8s-combat-service-5bfd78856f-wlc8z   2/2     Running   0

At this point, each Pod has two containers (one of them the istio-proxy sidecar); the application itself is the code used in the earlier gRPC load-balancing test.
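
To confirm the injection, you can list the containers of one Pod (the Pod name below is copied from the output above; yours will differ):

```shell
# Print the container names of an injected Pod; the Istio sidecar
# shows up as "istio-proxy" next to the application container.
kubectl get pod k8s-combat-service-5bfd78856f-8zjjf \
  -o jsonpath='{.spec.containers[*].name}'
```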

Running the same load-balancing test again gives the same results, which shows that Istio is working.
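
To see which backends the sidecar itself knows about, istioctl can dump Envoy's endpoint table; this is a sketch, with the Pod name copied from the output above and the cluster name matching the one that appears in the sidecar log below:

```shell
# Show the upstream endpoints Envoy tracks for this Pod's sidecar,
# filtered to the outbound cluster of the service under test.
istioctl proxy-config endpoints k8s-combat-service-5bfd78856f-8zjjf \
  --cluster "outbound|50051||k8s-combat-service.default.svc.cluster.local"
```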

Looking at the sidecar's log, we can see the traffic that was just sent and received:

 $ k logs -f k8s-combat-service-5bfd78856f-wlc8z -c istio-proxy
 [2023-10-31T14:52:14.279Z] "POST /helloworld.Greeter/SayHello HTTP/2" 200 - via_upstream - "-" 12 61 14 9 "-" "grpc-go/1.58.3" "6d293d32-af96-9f87-a8e4-6665632f7236" "k8s-combat-service:50051" "172.17.0.9:50051" inbound|50051|| 127.0.0.6:42051 172.17.0.9:50051 172.17.0.9:40804 outbound_.50051_._.k8s-combat-service.default.svc.cluster.local default
 [2023-10-31T14:52:14.246Z] "POST /helloworld.Greeter/SayHello HTTP/2" 200 - via_upstream - "-" 12 61 58 39 "-" "grpc-go/1.58.3" "6d293d32-af96-9f87-a8e4-6665632f7236" "k8s-combat-service:50051" "172.17.0.9:50051" outbound|50051||k8s-combat-service.default.svc.cluster.local 172.17.0.9:40804 10.101.204.13:50051 172.17.0.9:54012 - default
 [2023-10-31T14:52:15.659Z] "POST /helloworld.Greeter/SayHello HTTP/2" 200 - via_upstream - "-" 12 61 35 34 "-" "grpc-go/1.58.3" "ed8ab4f2-384d-98da-81b7-d4466eaf0207" "k8s-combat-service:50051" "172.17.0.10:50051" outbound|50051||k8s-combat-service.default.svc.cluster.local 172.17.0.9:39800 10.101.204.13:50051 172.17.0.9:54012 - default
 [2023-10-31T14:52:16.524Z] "POST /helloworld.Greeter/SayHello HTTP/2" 200 - via_upstream - "-" 12 61 28 26 "-" "grpc-go/1.58.3" "67a22028-dfb3-92ca-aa23-573660b30dd4" "k8s-combat-service:50051" "172.17.0.8:50051" outbound|50051||k8s-combat-service.default.svc.cluster.local 172.17.0.9:44580 10.101.204.13:50051 172.17.0.9:54012 - default
 [2023-10-31T14:52:16.680Z] "POST /helloworld.Greeter/SayHello HTTP/2" 200 - via_upstream - "-" 12 61 2 2 "-" "grpc-go/1.58.3" "b4761d9f-7e4c-9f2c-b06f-64a028faa5bc" "k8s-combat-service:50051" "172.17.0.10:50051" outbound|50051||k8s-combat-service.default.svc.cluster.local 172.17.0.9:39800 10.101.204.13:50051 172.17.0.9:54012 - default
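
To make these log fields easier to read, here is a small sketch that pulls the status code and upstream address out of one such line with awk; the sample line is shortened from the output above, and the field positions follow Envoy's default access-log layout:

```shell
# A sample Envoy access-log line, shortened from the sidecar output.
line='[2023-10-31T14:52:14.279Z] "POST /helloworld.Greeter/SayHello HTTP/2" 200 - via_upstream - "-" 12 61 14 9 "-" "grpc-go/1.58.3" "6d293d32-af96-9f87-a8e4-6665632f7236" "k8s-combat-service:50051" "172.17.0.9:50051" inbound|50051||'

# Field 5 is the HTTP response code; field 18 is the upstream address
# (quoted in the log, so strip the quotes).
code=$(echo "$line" | awk '{print $5}')
upstream=$(echo "$line" | awk '{print $18}' | tr -d '"')

echo "status=$code upstream=$upstream"
```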

Summary

This installment is relatively simple and mainly covers installation and configuration. The next installment will cover configuring timeouts, rate limiting, and other features for calls between internal services.

Most of these operations are operations-oriented; even upcoming features such as timeout configuration only require writing YAML resources.

In production, however, we usually give developers a visual management console so they can flexibly configure what would otherwise require hand-written YAML.

In fact, all major cloud platform vendors provide similar capabilities, such as Alibaba Cloud's EDAS.

All source code of this article can be accessed here: https://github.com/crossoverJie/k8s-combat
