How should we carry out continuous delivery of software based on containers (I)

Overview

Over the past few years, containers have been widely adopted across all aspects of IT software production: software development, continuous integration, continuous deployment, and test and production environments.

In addition to Docker's official Docker Swarm, Docker Machine, and Docker Compose, the open source community has produced a series of container-related tools covering container orchestration, scheduling, monitoring, logging, and more.

This article focuses on the software development process and on how to solve the problems of continuous software delivery and team collaboration with containers.

Using containers in continuous integration

Unified management of build environments

In the traditional mode, using a continuous integration tool such as Jenkins, the first problem in deploying an enterprise continuous integration platform is the diversity of build environment requirements. The usual practice is to assign build agents (servers or virtual machines) to teams and let each team manage its build server's environment configuration and install the corresponding build dependencies.

Using Docker in Continuous Integration

    docker run --rm -v "$(pwd)":/workspace -v /tmp/.m2/repository:/root/.m2/repository --workdir /workspace maven:3-jdk-8 /bin/sh -c 'mvn clean package'

As shown above, we can easily build the software package inside a container. A few points are worth noting:

The --rm flag ensures that the container created for the build is removed automatically after the command finishes; you don't want to be cleaning up the build server's disk by hand.

-v mounts the current source directory into the container; we can also cache build dependencies, such as the jar packages Maven downloads, by mounting a host directory, which improves compilation efficiency.

--workdir specifies the working directory in which the build command runs; naturally, it must match the path the source code is mounted to.

As mentioned above, with containers we can quickly assemble a CI build environment that adapts to a wide variety of build requirements. All the build server needs is Docker.
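The docker run invocation above can be wrapped in a small helper so that every project only has to supply its build image and command. This is a minimal sketch under the same mount conventions as the example; the function name and cache path are assumptions, not part of the original setup.

```shell
#!/bin/sh
# ci_build IMAGE COMMAND...
# Runs COMMAND inside IMAGE with the current directory mounted as the
# workspace and the Maven repository cached on the host, mirroring the
# docker run example above.
ci_build() {
    image="$1"
    shift
    docker run --rm \
        -v "$(pwd)":/workspace \
        -v /tmp/.m2/repository:/root/.m2/repository \
        --workdir /workspace \
        "$image" /bin/sh -c "$*"
}

# Usage (hypothetical): ci_build maven:3-jdk-8 'mvn clean package'
```

With a helper like this, switching a project to a different toolchain is just a matter of passing a different image, such as a Node.js or Go image.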

Using docker-compose in continuous integration

In some cases we may need real third-party dependencies, such as a database or cache server, during the build or integration-test phase. In traditional continuous integration practice, you usually either use an already-deployed database directly (remembering to clean up test data and handle concurrent builds), substitute an in-memory database for the real one, or test against mocks or stubs.

Ideally, of course, we still want a real database or other middleware service consistent with the production environment. With docker-compose, we can easily satisfy such complex build-environment needs.

    build:
      image: maven:3-jdk-8
      command: sh -c 'mvn --help'
      links:
        - mysql
      volumes:
        - '.:/code'
        - '/tmp/.m2/repository:/root/.m2/repository'
      working_dir: /code
    mysql:
      image: mysql:5.5
      environment:
        MYSQL_DATABASE: test
        MYSQL_USER: test
        MYSQL_PASSWORD: test
        MYSQL_ROOT_PASSWORD: test

Take Maven as an example again: suppose the build needs MySQL to support integration tests, as declared in the docker-compose.yml above.

    docker-compose run --rm build sh -c 'mvn clean package' && docker-compose stop && docker-compose rm -f

  • --rm ensures that the container created for the build service is removed automatically after the build command finishes.
  • docker-compose stop && docker-compose rm -f ensures that dependent services such as mysql exit cleanly and that their containers are removed.
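Note that with the `&&` chaining above, the stop-and-remove steps are skipped when the build fails, leaving the mysql container behind. A minimal sketch of a more robust wrapper, assuming the docker-compose.yml above:

```shell
#!/bin/sh
# compose_build: run the build service, then always stop and remove the
# dependent containers (e.g. mysql), while preserving the build command's
# exit status so CI still reports failures correctly.
compose_build() {
    docker-compose run --rm build sh -c 'mvn clean package'
    status=$?
    docker-compose stop
    docker-compose rm -f
    return $status
}
```

Running cleanup unconditionally keeps the build agent's disk and container list tidy even on red builds.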

Building a Continuous Delivery Solution

Establishing cross-functional R&D teams with common goals is the foundation of the DevOps movement, and automation is the cornerstone of improving efficiency. With that in mind, how do we build a container-based continuous delivery solution?

Infrastructure Automation

For infrastructure automation we chose Rancher, and the reason is simple: Rancher is an out-of-the-box container management platform that also supports multiple orchestration engines, including Rancher's own Cattle, Google's Kubernetes, and Docker's official Swarm. In addition, the Catalog application store provided by Rancher helps the R&D team create the service instances it needs on its own.

Creating a continuous delivery pipeline

The core issue in establishing a continuous delivery pipeline is how to define the enterprise's software delivery value stream.

As shown in the figure below, we summarize typical tools used at each stage of development, continuous integration, and continuous delivery, the activities of the teams involved at each stage, and typical DevOps-related practices.

Team collaboration in the continuous delivery pipeline

As mentioned above, the essence of a continuous delivery pipeline is to define the software delivery value stream and make the formal delivery process visible. Moving value along the stream requires a high degree of collaboration among team members in different functional roles.

In container-based continuous delivery practice, images are the medium through which value passes between people in different roles.

  • Developers: commit frequently to continuous integration; continuous compilation, packaging, testing, image building, and automated acceptance testing produce a list of testable candidate images (e.g. 0.1-dev).
  • Testers: pick a target image from the candidate list, mark it as a test version (0.1-dev becomes 0.1-test), and automatically deploy it to the acceptance test environment for manual exploratory testing. Images that pass are marked as pre-release versions (0.1-test becomes 0.1-beta).
  • Operations: pick an image from the pre-release list and deploy it to the pre-release environment; after verification, mark it as a release version (0.1-beta becomes 0.1-release) and release it to production.
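The promotion steps above can be sketched as a small re-tagging helper; the registry and image names here are hypothetical, not part of the original pipeline:

```shell
#!/bin/sh
# promote IMAGE SRC_TAG DST_TAG
# Re-tags an already-verified image so it moves to the next pipeline stage
# without being rebuilt, e.g.:
#   promote registry.example.com/myapp 0.1-dev 0.1-test
promote() {
    image="$1"; src="$2"; dst="$3"
    docker pull "$image:$src"
    docker tag "$image:$src" "$image:$dst"
    docker push "$image:$dst"
}
```

Promoting by re-tagging rather than rebuilding guarantees that the exact artifact that was tested is the one that reaches the next stage.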

In this container-based continuous delivery solution, images are the unit of value transfer: through continuous testing and verification, an image moves from the development and testing states to a releasable state, completing the software delivery process.
