Linkerd Canary Deployment and A/B Testing

This guide shows you how to use Linkerd and Flagger to automate canary deployments and A/B testing.

Flagger Linkerd Traffic Split

Prerequisites

Flagger requires a Kubernetes cluster v1.16 or later and Linkerd 2.10 or later.

Install Linkerd and Prometheus (part of Linkerd Viz):

  linkerd install | kubectl apply -f -
  linkerd viz install | kubectl apply -f -
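
Before continuing, you may want to confirm that the control plane and the Viz extension are healthy (a quick sanity check with the linkerd CLI):

  linkerd check
  linkerd viz check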

Install Flagger in the linkerd namespace:

  kubectl apply -k github.com/fluxcd/flagger//kustomize/linkerd
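
You can verify that the Flagger controller is up before moving on (assuming the default deployment name flagger created by the kustomize overlay):

  kubectl -n linkerd rollout status deployment/flagger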

Bootstrap

Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), then creates a series of objects (a Kubernetes Deployment, ClusterIP Services, and an SMI TrafficSplit). These objects expose the application inside the mesh and drive the canary analysis and promotion.

Create a test namespace and enable Linkerd proxy injection:

  kubectl create ns test
  kubectl annotate namespace test linkerd.io/inject=enabled

Install a load testing service to generate traffic during canary analysis:

  kubectl apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main

Create a deployment and a horizontal pod autoscaler:

  kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main

Create a Canary custom resource for the podinfo deployment:

  apiVersion: flagger.app/v1beta1
  kind: Canary
  metadata:
    name: podinfo
    namespace: test
  spec:
    # deployment reference
    targetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: podinfo
    # HPA reference (optional)
    autoscalerRef:
      apiVersion: autoscaling/v2beta2
      kind: HorizontalPodAutoscaler
      name: podinfo
    # the maximum time in seconds for the canary deployment
    # to make progress before it is rolled back (default 600s)
    progressDeadlineSeconds: 60
    service:
      # ClusterIP port number
      port: 9898
      # container port number or name (optional)
      targetPort: 9898
    analysis:
      # schedule interval (default 60s)
      interval: 30s
      # max number of failed metric checks before rollback
      threshold: 5
      # max traffic percentage routed to canary
      # percentage (0-100)
      maxWeight: 50
      # canary increment step
      # percentage (0-100)
      stepWeight: 5
      # Linkerd Prometheus checks
      metrics:
        - name: request-success-rate
          # minimum req success rate (non 5xx responses)
          # percentage (0-100)
          thresholdRange:
            min: 99
          interval: 1m
        - name: request-duration
          # maximum req duration P99
          # milliseconds
          thresholdRange:
            max: 500
          interval: 30s
      # testing (optional)
      webhooks:
        - name: acceptance-test
          type: pre-rollout
          url: http://flagger-loadtester.test/
          timeout: 30s
          metadata:
            type: bash
            cmd: "curl -sd 'test' http://podinfo-canary.test:9898/token | grep token"
        - name: load-test
          type: rollout
          url: http://flagger-loadtester.test/
          metadata:
            cmd: "hey -z 2m -q 10 -c 2 http://podinfo-canary.test:9898/"

Save the above resources as podinfo-canary.yaml and apply:

  kubectl apply -f ./podinfo-canary.yaml

When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary. The canary analysis will run for five minutes while validating HTTP metrics and rollout hooks every half minute.

After a few seconds, Flagger will create the canary objects:

  # applied
  deployment.apps/podinfo
  horizontalpodautoscaler.autoscaling/podinfo
  ingresses.extensions/podinfo
  canary.flagger.app/podinfo

  # generated
  deployment.apps/podinfo-primary
  horizontalpodautoscaler.autoscaling/podinfo-primary
  service/podinfo
  service/podinfo-canary
  service/podinfo-primary
  trafficsplits.split.smi-spec.io/podinfo

After bootstrap, the podinfo deployment will be scaled to zero and traffic to podinfo.test will be routed to the primary pods. During canary analysis, the canary pods can be reached directly at the podinfo-canary.test address.
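
To confirm this routing while an analysis is running, you can send a request to the canary address from a pod inside the mesh, for example the load tester (a quick check, assuming the flagger-loadtester deployment installed above; outside an active analysis the canary deployment is scaled to zero and the request will fail):

  kubectl -n test exec -it deploy/flagger-loadtester -- curl -s http://podinfo-canary.test:9898/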

Automatic canary advancement

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators such as HTTP request success rate, average request duration, and Pod health. Based on the analysis of the KPIs, the canary is promoted or aborted, and the analysis results are published to Slack.

Flagger Canary Phase
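
One way to watch the traffic shift is to inspect the SMI TrafficSplit object that Flagger manages for the service (the backend weights change at every step of the analysis):

  watch kubectl -n test get trafficsplit podinfo -o yaml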

Trigger a canary deployment by updating the container image:

  kubectl -n test set image deployment/podinfo \
    podinfod=stefanprodan/podinfo:3.1.1

Flagger detects that the deployment revision has changed and starts a new deployment:

  kubectl -n test describe canary/podinfo

  Status:
    Canary Weight: 0
    Failed Checks: 0
    Phase: Succeeded
  Events:
    New revision detected! Scaling up podinfo.test
    Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
    Pre-rollout check acceptance-test passed
    Advance podinfo.test canary weight 5
    Advance podinfo.test canary weight 10
    Advance podinfo.test canary weight 15
    Advance podinfo.test canary weight 20
    Advance podinfo.test canary weight 25
    Waiting for podinfo.test rollout to finish: 1 of 2 updated replicas are available
    Advance podinfo.test canary weight 30
    Advance podinfo.test canary weight 35
    Advance podinfo.test canary weight 40
    Advance podinfo.test canary weight 45
    Advance podinfo.test canary weight 50
    Copying podinfo.test template spec to podinfo-primary.test
    Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
    Promotion completed! Scaling down podinfo.test
Note that if you apply new changes to your deployment during canary analysis, Flagger will restart the analysis.

A canary deployment is triggered by a change to any of the following objects (an example follows the list):

  • Deployment PodSpec (container image, command, ports, environment, resources, etc.)
  • ConfigMaps mounted as volumes or mapped to environment variables
  • Secrets mounted as volumes or mapped to environment variables
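
For example, changing an environment variable edits the PodSpec and therefore starts a new canary analysis (a hypothetical variable, used only to illustrate the trigger):

  kubectl -n test set env deployment/podinfo DEMO_TRIGGER=1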

You can monitor all canaries via:

  watch kubectl get canaries --all-namespaces

  NAMESPACE   NAME       STATUS        WEIGHT   LASTTRANSITIONTIME
  test        podinfo    Progressing   15       2019-06-30T14:05:07Z
  prod        frontend   Succeeded     0        2019-06-30T16:15:07Z
  prod        backend    Failed        0        2019-06-30T17:05:07Z

Automatic rollback

During canary analysis, you can generate HTTP 500 errors and high latency to test whether Flagger pauses and rolls back the faulty version.

Trigger another canary deployment:

  kubectl -n test set image deployment/podinfo \
    podinfod=stefanprodan/podinfo:3.1.2

Exec into the load tester pod with the following command:

  kubectl -n test exec -it flagger-loadtester-xx-xx sh

Generate HTTP 500 errors:

  watch -n 1 curl http://podinfo-canary.test:9898/status/500

Generate latency:

  watch -n 1 curl http://podinfo-canary.test:9898/delay/1

When the number of failed checks reaches the canary analysis threshold, traffic is routed back to the primary, the canary is scaled to zero, and the rollout is marked as failed.

  kubectl -n test describe canary/podinfo

  Status:
    Canary Weight: 0
    Failed Checks: 10
    Phase: Failed
  Events:
    Starting canary analysis for podinfo.test
    Pre-rollout check acceptance-test passed
    Advance podinfo.test canary weight 5
    Advance podinfo.test canary weight 10
    Advance podinfo.test canary weight 15
    Halt podinfo.test advancement success rate 69.17% < 99%
    Halt podinfo.test advancement success rate 61.39% < 99%
    Halt podinfo.test advancement success rate 55.06% < 99%
    Halt podinfo.test advancement request duration 1.20s > 0.5s
    Halt podinfo.test advancement request duration 1.45s > 0.5s
    Rolling back podinfo.test failed checks threshold reached 5
    Canary failed! Scaling down podinfo.test

Custom Metrics

Canary analysis can be extended via Prometheus queries.

Let's define a check for HTTP 404 not found errors. Edit the canary analysis and add the following metric:

  analysis:
    metrics:
      - name: "404s percentage"
        threshold: 3
        query: |
          100 - sum(
              rate(
                  response_total{
                      namespace="test",
                      deployment="podinfo",
                      status_code!="404",
                      direction="inbound"
                  }[1m]
              )
          )
          /
          sum(
              rate(
                  response_total{
                      namespace="test",
                      deployment="podinfo",
                      direction="inbound"
                  }[1m]
              )
          )
          * 100

The above configuration verifies the canary version by checking if the HTTP 404 req/sec percentage is below 3% of the total traffic. If the 404s rate reaches the 3% threshold, the analysis is aborted and the canary is marked as failed.

Trigger a canary deployment by updating the container image:

  kubectl -n test set image deployment/podinfo \
    podinfod=stefanprodan/podinfo:3.1.3

Generate HTTP 404 errors:

  watch -n 1 curl http://podinfo-canary:9898/status/404

Monitor Flagger logs:

  kubectl -n linkerd logs deployment/flagger -f | jq .msg

  Starting canary deployment for podinfo.test
  Pre-rollout check acceptance-test passed
  Advance podinfo.test canary weight 5
  Halt podinfo.test advancement 404s percentage 6.20 > 3
  Halt podinfo.test advancement 404s percentage 6.45 > 3
  Halt podinfo.test advancement 404s percentage 7.22 > 3
  Halt podinfo.test advancement 404s percentage 6.50 > 3
  Halt podinfo.test advancement 404s percentage 6.34 > 3
  Rolling back podinfo.test failed checks threshold reached 5
  Canary failed! Scaling down podinfo.test

If you have Slack configured, Flagger will send a notification stating why the canary failed.
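
Slack alerting is configured on the Flagger controller itself. If you manage Flagger with Helm instead of the kustomize overlay used above, one way to wire it up looks roughly like this (a sketch based on the Flagger chart's slack.* values; the webhook URL, channel, and user below are placeholders):

  helm upgrade -i flagger flagger/flagger \
    --namespace linkerd \
    --set meshProvider=linkerd \
    --set metricsServer=http://prometheus.linkerd-viz:9090 \
    --set slack.url=https://hooks.slack.com/services/YOUR/WEBHOOK \
    --set slack.channel=general \
    --set slack.user=flagger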

Linkerd Ingress

There are two ingress controllers that are compatible with Flagger and Linkerd: NGINX and Gloo.

Install NGINX:

  helm upgrade -i nginx-ingress stable/nginx-ingress \
    --namespace ingress-nginx

Create an ingress definition for podinfo that rewrites incoming headers to the internal service name (required for Linkerd):

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: podinfo
    namespace: test
    labels:
      app: podinfo
    annotations:
      kubernetes.io/ingress.class: "nginx"
      nginx.ingress.kubernetes.io/configuration-snippet: |
        proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:9898;
        proxy_hide_header l5d-remote-ip;
        proxy_hide_header l5d-server-id;
  spec:
    rules:
      - host: app.example.com
        http:
          paths:
            - backend:
                serviceName: podinfo
                servicePort: 9898

When using an ingress controller, Linkerd traffic splitting does not apply to incoming traffic because NGINX runs outside the mesh. To run canary analysis on the frontend application, Flagger creates a shadow ingress and sets NGINX-specific annotations.

A/B Testing

In addition to weighted routing, Flagger can also be configured to route traffic to canaries based on HTTP match conditions. In an A/B testing scenario, you will use HTTP headers or cookies to target your specific user base. This is particularly useful for front-end applications that require session affinity.

Flagger Linkerd Ingress

Edit the podinfo canary analysis, set the provider to nginx, add the ingress reference, remove the max/step weights and add match conditions and iterations:

  apiVersion: flagger.app/v1beta1
  kind: Canary
  metadata:
    name: podinfo
    namespace: test
  spec:
    # ingress reference
    provider: nginx
    ingressRef:
      apiVersion: extensions/v1beta1
      kind: Ingress
      name: podinfo
    targetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: podinfo
    autoscalerRef:
      apiVersion: autoscaling/v2beta2
      kind: HorizontalPodAutoscaler
      name: podinfo
    service:
      # container port
      port: 9898
    analysis:
      interval: 1m
      threshold: 10
      iterations: 10
      match:
        # curl -H 'X-Canary: always' http://app.example.com
        - headers:
            x-canary:
              exact: "always"
        # curl -b 'canary=always' http://app.example.com
        - headers:
            cookie:
              exact: "canary"
      # Linkerd Prometheus checks
      metrics:
        - name: request-success-rate
          thresholdRange:
            min: 99
          interval: 1m
        - name: request-duration
          thresholdRange:
            max: 500
          interval: 30s
      webhooks:
        - name: acceptance-test
          type: pre-rollout
          url: http://flagger-loadtester.test/
          timeout: 30s
          metadata:
            type: bash
            cmd: "curl -sd 'test' http://podinfo-canary:9898/token | grep token"
        - name: load-test
          type: rollout
          url: http://flagger-loadtester.test/
          metadata:
            cmd: "hey -z 2m -q 10 -c 2 -H 'Cookie: canary=always' http://app.example.com"

The above configuration will run a 10 minute analysis targeting users who have the canary cookie set to always or who call the service with the X-Canary: always header.

Note that the load test now targets an external address and uses a canary cookie.
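
While the analysis is running you can hit the canary route yourself; either of the requests below should be served by the canary, while plain requests keep going to the primary (assuming app.example.com points at your NGINX ingress):

  curl -H 'X-Canary: always' http://app.example.com
  curl -b 'canary=always' http://app.example.com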

Trigger a canary deployment by updating the container image:

  kubectl -n test set image deployment/podinfo \
    podinfod=stefanprodan/podinfo:3.1.4

Flagger detects that the deployment revision has changed and starts an A/B test:

  kubectl -n test describe canary/podinfo

  Events:
    Starting canary deployment for podinfo.test
    Pre-rollout check acceptance-test passed
    Advance podinfo.test canary iteration 1/10
    Advance podinfo.test canary iteration 2/10
    Advance podinfo.test canary iteration 3/10
    Advance podinfo.test canary iteration 4/10
    Advance podinfo.test canary iteration 5/10
    Advance podinfo.test canary iteration 6/10
    Advance podinfo.test canary iteration 7/10
    Advance podinfo.test canary iteration 8/10
    Advance podinfo.test canary iteration 9/10
    Advance podinfo.test canary iteration 10/10
    Copying podinfo.test template spec to podinfo-primary.test
    Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
    Promotion completed! Scaling down podinfo.test
