Serverless Engineering Practice | Getting Started with Knative Applications from Zero Foundation

Introduction to Knative

Knative implements its Serverless standard by integrating container (or function) building, workload management with dynamic scaling, and an event model.

In the Knative architecture, the collaboration between the roles is shown in the following figure.

Developers are the authors of Serverless services; based on Knative, they can deploy Serverless services directly through the native Kubernetes API.
Contributors mainly refer to community contributors.
Knative can be integrated into supported environments, such as cloud vendors or enterprise platforms. Because Knative is currently implemented on top of Kubernetes, it can be deployed wherever Kubernetes is available.
Users are the end users, who access services through the Istio gateway or trigger the Serverless services in Knative through the event system.
Collaboration between roles in the Knative architecture

As a general Serverless framework, Knative consists of three core components.

Tekton: Provides universal building capabilities from source code to images. The Tekton component is mainly responsible for obtaining source code from the code repository, compiling it into an image, and pushing it to the image repository. All of these operations are performed in Kubernetes Pods.
Eventing: Provides a complete set of event management capabilities, such as event access and triggering. The Eventing component has a complete design for the Serverless event-driven model, including access to external event sources, event registration, subscription, and event filtering. The event model effectively decouples producers from consumers: producers can generate events before consumers start, and consumers can listen for events before producers start.
Serving: Manages Serverless workloads, integrates well with events, and provides request-driven automatic scaling, including scaling down to zero when there are no requests to process. The responsibility of the Serving component is to manage workloads that serve external traffic. Its most important feature is automatic scaling, currently with no fixed upper bound on the scaling range. Serving also supports grayscale (canary) releases.
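To make the Serving model concrete, the following is a minimal sketch of deploying a Knative Service. The `helloworld-go` sample image and the names used here are illustrative assumptions, not taken from a specific cluster; the sketch assumes kubectl is already configured against a cluster with Knative Serving installed.

```shell
# Apply a minimal Knative Service manifest. Knative then creates the
# Configuration, Revision, and Route objects for it automatically.
kubectl apply -f - <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative"
EOF

# Watch the workload scale up from zero as requests arrive,
# and back down to zero when traffic stops.
kubectl get pods -n default -w
```

Note that, unlike a plain Kubernetes Deployment, no replica count is specified: Serving's autoscaler decides the Pod count from request load.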
Knative deployment

This article takes deploying Knative on Alibaba Cloud as an example to explain in detail how to deploy the related services. First, log in to the Container Service management console, as shown in the figure.

Alibaba Cloud Container Service Management Console

If you do not have a cluster yet, you can create one first, as shown in the following figure.

Configuring and creating a cluster

Cluster creation takes a while; please wait patiently for it to complete. On success, the console looks like the figure below.

Schematic diagram of successful cluster creation

After entering the cluster, select "Applications" in the left navigation, find "Knative", and click "One-click Deployment", as shown in the figure.

Creating a Knative Application

After a short wait, once Knative is installed, you can see that the core components are in the "Deployed" state, as shown in the figure.

Knative application deployment is complete

So far, we have completed the deployment of Knative.
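As a cross-check from the command line (assuming kubectl access to the cluster), you can confirm the core components are running. The namespaces below are the upstream Knative defaults and may differ in a hosted installation.

```shell
# Core Serving components: controller, autoscaler, activator, webhook
kubectl get pods -n knative-serving

# Eventing components, if the Eventing add-on was selected
kubectl get pods -n knative-eventing
```

All Pods should report a Running status before you proceed to testing.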

Experience Test

First, create an EIP and bind it to the API Server service; then quickly create a sample application, as shown in the following figure.

Quickly create a sample application

After the creation is complete, you can see that a Serverless application has appeared in the console, as shown in the figure.

The sample application was created successfully

At this point, you can click the application name to view its details, as shown in the figure below.

View sample app details

To facilitate testing, you can map the assigned domain name to the gateway IP in the local hosts file:

101.200.87.158 helloworld-go.default.example.com
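If you prefer not to edit the hosts file, curl can supply the Host header directly. The IP address and domain below are the ones from this example and will differ in your environment.

```shell
# Send the request to the gateway IP while overriding the Host header,
# so Knative's routing layer can match the service's domain.
curl -H "Host: helloworld-go.default.example.com" http://101.200.87.158/

# Equivalent form: let curl fake the DNS resolution for the domain.
curl --resolve helloworld-go.default.example.com:80:101.200.87.158 \
     http://helloworld-go.default.example.com/
```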

After the settings are complete, open the system-assigned domain name in a browser; the expected output appears, as shown in the figure.

Browser Test Sample App

So far, we have completed the deployment and testing of a Serverless application based on Knative.

We can also manage the cluster through CloudShell. On the cluster list page, choose to manage the cluster through CloudShell, as shown in the figure.

Cluster management list

Manage the created cluster through CloudShell, as shown in the figure.

CloudShell window

Execute the command:

kubectl get knative

You can see the newly deployed Knative application, as shown in the figure.

CloudShell View Knative Application
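The `kubectl get knative` shorthand works because the Knative CRDs register themselves under a `knative` resource category. You can also query the individual resource types; the sketch below assumes the sample application lives in the `default` namespace.

```shell
# Knative Service (short name: ksvc) shows each app's URL and readiness
kubectl get ksvc -n default

# The underlying objects Knative manages for each Service
kubectl get configuration,revision,route -n default
```

The URL column of the `ksvc` output is the same domain used in the browser test above.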
