Edge computing workloads: VMs, containers, or bare metal?

We live in an age of connected and smart devices. As the number of smart devices grows, data volumes are reaching new heights. This data travels from the end user to the cloud or data center for processing, storage, and analytics, so accessing it inevitably incurs latency and consumes bandwidth. As Nati Shalom wrote in his blog post “What is Edge Computing?”, edge computing essentially moves processing power to the edge of the network, closer to the data source. This gives organizations significant advantages in both the speed at which data is accessed and the bandwidth consumed.

Because the edge plays such a critical role, it is equally important to consider the infrastructure technologies on which edge workloads run.

Providing technology for edge workloads

Infrastructure technology has gone through an entire paradigm shift, starting with physical servers, moving to the birth of virtual machines (VMs), and most recently to containers. While VMs have served well over the past decade or so, containers offer inherent advantages over VMs and are well suited to running edge workloads.

The following diagram describes how containers work compared to VMs.

Each VM runs a unique operating system on top of a shared hypervisor (software or firmware layer), resulting in “hardware-level virtualization.” In contrast, containers run on top of physical infrastructure and share the same kernel, resulting in “OS-level virtualization.”

This shared OS keeps container images in the megabyte range, making containers very “light” and flexible and reducing boot time to seconds, compared to minutes for VMs. Since containers share the same OS, management tasks (patching, upgrading, etc.) are also reduced for OS administrators. On the other hand, with containers a single kernel vulnerability can bring down the entire host, whereas an attacker must break through both the host kernel and the hypervisor before reaching a VM's kernel. Where strong isolation matters, VMs are still the better choice.

Today, a lot of research is going on towards the goal of bringing bare metal capabilities to edge workloads. Packet is one such organization working towards a unique proposition that meets the needs of low latency and local processing.

Containers on VMs or bare metal?

CenturyLink conducted an interesting study on running Kubernetes clusters on bare metal and virtual machines. For this test, an open source utility called netperf was used to measure the network latency of the two clusters.

Since physical servers don’t have a hypervisor as an overhead, the results are in line with expectations. Kubernetes and containers running on bare metal servers have significantly lower latency; in fact, three times lower than when running Kubernetes on VMs. Additionally, CPU consumption is significantly higher when running the cluster on VMs compared to bare metal.
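The netperf TCP_RR test used in the study measures request/response round trips over a TCP connection. As a rough illustration of that methodology (not CenturyLink's actual test harness), the following Python sketch measures mean round-trip latency for one-byte request/response pairs over a loopback TCP connection:

```python
import socket
import threading
import time

def echo_server(sock: socket.socket) -> None:
    """Echo each 1-byte request back to the client, netperf TCP_RR style."""
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(1):
            conn.sendall(data)

def measure_rtt(rounds: int = 1000) -> float:
    """Return the mean request/response round-trip time in microseconds."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))       # ephemeral port on loopback
    server.listen(1)
    threading.Thread(target=echo_server, args=(server,), daemon=True).start()

    client = socket.socket()
    client.connect(server.getsockname())
    # Disable Nagle's algorithm so each tiny request is sent immediately,
    # as netperf does for request/response tests.
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    start = time.perf_counter()
    for _ in range(rounds):
        client.sendall(b"x")
        client.recv(1)                  # block until the echo returns
    elapsed = time.perf_counter() - start

    client.close()
    server.close()
    return elapsed / rounds * 1e6

if __name__ == "__main__":
    print(f"mean RTT: {measure_rtt():.1f} microseconds")
```

Run against a remote host instead of loopback, this kind of loop makes the hypervisor's extra network-path overhead directly visible as a higher mean RTT.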

Should all edge workloads run on bare metal?

While databases, analytics, machine learning algorithms, and other data-intensive enterprise applications are ideal candidates for running containers on bare metal, there are some advantages to running containers on VMs. Out-of-the-box features (such as live migration of workloads from one host to another, rollback to previous configurations in case of issues, software upgrades, etc.) are easier to achieve on VMs than in bare metal environments.

As noted above, the light weight and fast start/stop of containers make them a great fit for edge workloads; the choice between bare metal and VMs underneath them is always a trade-off.

Public cloud and edge workloads

Most public clouds, including Microsoft Azure and Amazon Web Services, offer Containers as a Service (CaaS). Both are built on top of their existing VM-based infrastructure layers, providing the portability and flexibility required for edge computing.

AWS has also launched Greengrass, a software layer that extends cloud-like capabilities to the edge, enabling local data collection and execution.

Let’s see how it works.

A Greengrass Group contains two components. The first is the Greengrass Core, which handles local execution of AWS Lambda functions, messaging, and security. The second is IoT devices running the AWS IoT Device SDK, which communicate with the Greengrass Core over the local network. If the Greengrass Core loses its connection to the cloud, it still maintains communication with the other local devices.
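That local-first behavior can be sketched in a few lines. The class below is a toy model of the routing idea only; the names are illustrative and not the actual AWS Greengrass SDK:

```python
class GreengrassCoreSketch:
    """Toy model of a Greengrass Core: delivers device messages locally and
    forwards them to the cloud only while a cloud connection is available.
    Illustrative names only; this is not the AWS SDK."""

    def __init__(self) -> None:
        self.cloud_connected = True
        self.subscribers = {}   # topic -> list of local device callbacks
        self.cloud_outbox = []  # messages forwarded to the cloud

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Local delivery always works, even when the cloud link is down.
        for callback in self.subscribers.get(topic, []):
            callback(payload)
        if self.cloud_connected:
            self.cloud_outbox.append((topic, payload))

core = GreengrassCoreSketch()
received = []
core.subscribe("sensors/temp", received.append)

core.publish("sensors/temp", 21.5)  # delivered locally and to the cloud
core.cloud_connected = False        # simulate losing the cloud link
core.publish("sensors/temp", 22.0)  # still delivered to local devices
```

The point of the sketch is the asymmetry: cloud forwarding degrades when the link drops, but device-to-device messaging at the edge keeps working.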

Enterprise adoption and the challenges involved

Containers are one of the hottest technologies thanks to the speed, density, and flexibility they offer, but security concerns can still be a barrier to enterprises adopting containers for edge workloads. Two of the main issues are:

  • Denial of service: a single application may consume a large share of the operating system's resources, depriving other applications of the minimum resources they need to keep running and ultimately forcing the operating system to shut down.
  • Kernel exploitation: Containers share the same kernel, so if an attacker gains access to the host operating system, they have access to all applications running on the host.
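The usual mitigation for the denial-of-service issue is to cap each container's resources via cgroups (for example, `docker run --memory=256m`). The same idea can be sketched at the process level with POSIX resource limits; this is a minimal, Linux-oriented illustration of resource capping, not a substitute for real container limits:

```python
import resource

# Cap this process's address space at ~1 GB, analogous to the per-container
# memory limits a container runtime enforces so one workload cannot starve
# its neighbours. Linux-specific; the exact numbers are illustrative.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (1 << 30, hard))

try:
    hog = bytearray(2 << 30)  # try to grab ~2 GB, well over the cap
    outcome = "allocation succeeded"
except MemoryError:
    outcome = "allocation denied by the limit"
finally:
    # Restore the original limit so the rest of the process is unaffected.
    resource.setrlimit(resource.RLIMIT_AS, (soft, hard))

print(outcome)
```

With the cap in place the oversized allocation fails inside the offending process, rather than exhausting the host and taking every co-located workload down with it.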

The Way Forward: Latest Updates

Amid the various developments in infrastructure technology, New York-based startup Hyper is working to provide the best of both worlds: VMs and containers. HyperContainers (as Hyper calls them) converge the two. They provide the speed and flexibility of containers, namely the ability to spin up instances in less than a second with a minimal resource footprint, while also providing the security and isolation of VMs, preventing the shared-kernel issues of containers through hardware-enforced isolation.

Original link:

https://enterpriseiotinsights.com/20171010/opinion/edge-computing-workloads-vms-containers-or-bare-metal-tag10
