Debunking three myths about edge computing

[51CTO.com Quick Translation] With millions of machines and devices connecting to the Internet every day, companies are rethinking traditional architectures, using edge computing to change the way we view cloud infrastructure. In fact, Gartner predicts that more than 40% of enterprise IT departments will adopt edge computing strategies, up from about 1% a year earlier.

Edge computing continues to be a hot topic of discussion in the industry as more sensors, mobile devices, and powerful applications push data to the edge of our networks. More companies continue to place computing resources at the edge of the network, as close as possible to the devices that generate data and insights.

As organizations begin to explore edge computing, misconceptions are clouding their migration plans. Here are three common myths about edge computing resources.

Myth 1: Edge computing is resource-intensive

Although edge computing requires local resources outside of a typical data center, the resources required are minimal. Standard or even small data centers at the edge are not necessary to connect and process data at the edge of the network.

Edge computing processes data at the edge of the network, where the information is generated and where a remote central data center or cloud can do little to help in real time. By placing computing resources next to where data is collected, we can respond far faster to events such as cybersecurity breaches, or capitalize on real-time shifts in markets and consumer behavior.

Edge infrastructure can be as small as a single IoT device or as large as a micro data center made up of multiple computing devices. Think of the computing environments in remote or branch offices. In edge computing, resources may sit close to manufacturing systems, medical equipment, point-of-sale systems, and IoT devices.

Myth 2: Edge computing requires no operational changes

Edge computing may require multiple network providers and connection points to support the full load of an edge data center. Diversity and redundancy are critical: if one network provider fails or suffers an outage, the organization can still deliver the same quality of service. At the edge, connectivity may come from a cellular base station or a nearby metropolitan area network.

Building an edge network means changing the way you manage and run your data center. Your system is no longer housed in a large, easily accessible building with an on-site operations team. With hardware deployed in modular enclosures at remote sites that take time to reach, you have to build systems that are more like cellular networks.

Network performance in or near a data center is often taken for granted, thanks to standard high-availability connectivity and power systems. At the edge, that reliability cannot be assumed; it has to be deliberately engineered.

Myth 3: Edge computing is one-size-fits-all

It’s no surprise that some vendors will tell you they can provide an easy path to this new type of network combining compute and storage at the edge. But edge data centers are not one-size-fits-all: an installation may be a single server, a standalone rack, or even dozens of racks. Whatever the size, it requires the right equipment. Don’t think of edge data centers as cheap, shrunken infrastructure; think of each node as a data center in its own right, designed and tested to support your business needs.

Edge computing environments are small enough to run without dedicated IT staff. But to operate with minimal maintenance, the infrastructure must be easy to deploy and manage, and must connect easily to the main data center or cloud as needed. Most data centers rely on on-site staff working in shifts to maintain equipment. That model doesn’t work at the edge, where you are managing many small data centers in scattered locations alongside your primary data center assets.

This arrangement requires remote monitoring and extensive automation, and may call for redundant hardware at sites that are hard to reach. Edge applications need to be self-healing, or able to fail over to nearby nodes or data centers to maintain service levels. The industry has not yet settled on best practices here; we are still in trial-and-error mode to some extent, but once this approach matures, the computing landscape will look completely different.
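The failover behavior described above can be sketched in a few lines. Everything in this example is a hypothetical illustration, not a real product API: the node names, distances, and the stubbed `health_check` function are all assumptions made for the sake of the sketch.

```python
# Illustrative sketch of edge failover: route work to the nearest healthy
# node, falling back to the main data center only when every edge node is
# down. Node names, distances, and the health_check stub are hypothetical.

def health_check(node):
    """Return True if the node reports healthy; stubbed for illustration.
    A real implementation would probe the node over the network."""
    return node.get("healthy", False)

def select_node(nodes):
    """Pick the closest healthy node; fall back to the main data center."""
    for node in sorted(nodes, key=lambda n: n["distance_km"]):
        if health_check(node):
            return node["name"]
    return "main-data-center"  # last resort when all edge nodes are down

nodes = [
    {"name": "edge-factory-floor", "distance_km": 0, "healthy": False},
    {"name": "edge-branch-office", "distance_km": 12, "healthy": True},
    {"name": "regional-micro-dc", "distance_km": 80, "healthy": True},
]

# The local node is down, so the nearest healthy peer takes over.
print(select_node(nodes))
```

In a real deployment this decision would be driven by continuous health probes and automation rather than a static list, but the pattern is the same: prefer the closest healthy node, and degrade gracefully toward the central data center.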

Original title: Busting Three Edge Computing Myths, author: Jason Collier

[Translated by 51CTO. Please indicate the original translator and source as 51CTO.com when reprinting on partner sites]
