Three Misconceptions about Edge Computing

Every day, millions of machines and objects connect to the Internet, and companies are challenging traditional architectures by rethinking cloud infrastructure through edge computing. In fact, Gartner expects that more than 40% of enterprise IT organizations will adopt an edge computing strategy, up from roughly 1% only a few years earlier.

In today's world, edge computing continues to dominate industry discussions. As more sensors, mobile devices, and powerful applications drive data at the edge of the network, more and more companies are placing computing resources at the edge of the network to be close to the devices generating the data.

As organizations begin to look at edge computing, their view is often clouded by misconceptions. Here are the top three myths surrounding edge computing.


Myth 1 - Edge computing is resource-intensive

Edge computing does require local resources outside of a typical data center, but those resources can be minimal. A full data center, or even a small one, at the edge is unnecessary to connect and process data at the network edge.

Edge computing is data processing at the edge of the network, where the information is generated, with only limited reliance on a distant main data center or cloud. By placing computing resources next to the source where data is collected, we can significantly improve responses to events such as cybersecurity breaches, or take advantage of real-time changes in markets and consumer behavior.

Computing infrastructure can be as small as an IoT device or as large as a micro data center with multiple computing devices. Imagine a remote office or branch office environment where, through edge computing, resources sit adjacent to manufacturing systems, medical equipment, point-of-sale terminals, and IoT devices.

Myth 2 - Edge computing doesn’t require any changes

Edge computing may require multiple network providers and connection points to carry the full load of an edge data center. Diversity and redundancy are critical so that if one network provider’s link fails or is lost, the organization can still deliver the same high-quality service. With edge computing, the computing resources can run at a cell site or in a nearby metropolitan area network.

Building an edge network means changing the way you manage and run your data center. Your systems are no longer located in a large, easily accessible building with an on-site operations team. Because the hardware is deployed in modular enclosures at remote sites that take time to reach, you have to build something more like a cellular network.

Inside or near a data center, network performance is often taken for granted thanks to standard high-availability connectivity and power systems. At the edge, that same availability must be deliberately engineered; it is an absolute necessity.

Myth 3 - Edge computing is for everyone

It’s no surprise that some vendors will tell you they can provide an easy path to this new type of network by bundling compute and storage together. Edge data centers are not monolithic, and installations can be anything from a single server to a self-contained rack to 20 or 30 racks. Regardless of size, they require the right equipment. However, instead of thinking of edge data centers as cheap and small infrastructure, think of each individual node as a data center that must be designed and tested to support business needs.

Edge computing environments are small enough to operate without dedicated IT staff. However, to operate in a low-maintenance manner, the infrastructure needs to be easy to implement and manage, and easy to connect to the main data center or cloud as needed. Most data centers require on-site staff working in shifts to maintain equipment. This is not possible in edge computing, because you are managing many small data centers in different locations in addition to your main data center assets.
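Managing many unmanned sites like this usually starts with centralized health polling from the main data center. A minimal sketch of the idea in Python is shown below; the site names, addresses, and the /health endpoint are hypothetical placeholders, not any specific product's API:

```python
# Minimal sketch: a central script polls each edge site's (hypothetical)
# health endpoint and reports which sites are unreachable or unhealthy.
import urllib.request
import urllib.error

# Hypothetical edge sites mapped to their health-check URLs.
EDGE_SITES = {
    "store-042": "http://10.20.42.1:8080/health",
    "store-117": "http://10.20.117.1:8080/health",
}

def poll_sites(sites, timeout=2.0):
    """Return a dict mapping each site name to True (healthy) or False."""
    status = {}
    for name, url in sites.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                status[name] = resp.status == 200
        except (urllib.error.URLError, OSError):
            status[name] = False  # unreachable counts as unhealthy
    return status
```

In practice, a loop like this would feed an alerting system rather than a printout, and would run on a schedule so no site goes unobserved between shifts.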

This arrangement will require remote monitoring and a lot of automation. Redundant hardware may be needed to address access issues. Edge computing applications will need to be able to self-heal, or to fail over to a nearby node or data center to maintain service. So far, the industry has not established many best practices in this area. We are still in trial-and-error mode, but once this approach is perfected, the computing environment will become a completely different world.
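The failover described above can be sketched as a priority list of nodes: try the local edge node first, then a nearby node, then the central data center or cloud. The node names and the fetch callback below are illustrative assumptions, not a real API:

```python
# Hedged sketch of nearest-first failover across edge nodes.
def fetch_with_failover(fetch, nodes):
    """Call fetch(node) on each node in priority order.

    Returns the first successful result; raises RuntimeError only
    if every node in the list fails.
    """
    last_error = None
    for node in nodes:
        try:
            return fetch(node)
        except ConnectionError as err:
            last_error = err  # remember the failure, try the next node
    raise RuntimeError("all nodes failed") from last_error

# Usage sketch: local node first, then a metro node, then the cloud.
# result = fetch_with_failover(query_node,
#                              ["edge-local", "edge-metro", "cloud-central"])
```

Ordering the list from nearest to farthest keeps latency low in the common case while still guaranteeing service when the local node is down.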

