Let’s look at edge computing from another perspective: its power draw and share of total computing capacity are small. Is it over-hyped?

Edge computing has become increasingly popular in recent years: Internet companies, telecom operators, equipment manufacturers and many others are discussing it. So is the buzz around edge computing market hype or real demand? Dean Bubley, founder and director of the analyst firm Disruptive Analysis, has shared his views on this question. They follow below.

Edge computing is a mesh network of micro data centers that can process or store critical data locally and push all received data to a central data center or cloud repository.

Edge computing is often mentioned in the context of IoT use cases, where edge devices collect data that would otherwise all be sent to a data center or cloud for processing. Edge computing triages that data locally, processing some of it at the edge and thereby reducing backhaul traffic to the central repository.

Typically, this is done by IoT devices transmitting data to local devices, which include small-form-factor computing, storage, and network connectivity. Data is processed at the edge, and all or part of it is sent to a central processing or storage repository in a company’s data center, co-location facility, or IaaS cloud.

Edge computing is important, but its capabilities are also overhyped. Network edge computing is only a small part of the overall cloud computing landscape. Because it is small, it will likely only be complementary to (and integrated with) web-scale cloud platforms. It is unlikely that we will see a mainstream provider launch anything like a "next-generation, distributed Amazon AWS."

1. Edge computing from the perspective of power consumption

Why is the field of network edge computing so small? Let's look at it from a different angle: power. Power is a metric watched closely by people at the top and bottom of the computing industry, but rarely by those in the middle, such as network owners. As a result, they miss differences of several orders of magnitude.

1. High power loads in data centers

Cloud computing is described with many numbers: server counts, processors, standard-sized equipment racks, floor space, and so on. But the figure data center operators rely on most is probably power consumption, measured in watts, or more commonly kW, MW, and GW.

Power includes not only the demands of computing CPUs and GPUs, but also the storage and networking elements of the data center.

Roughly speaking, large data centers worldwide consume a total of about 100GW. A typical data center may have a capacity of 30MW, while the largest single facilities in the world exceed 100MW, with plans to expand some to 600MW or even 1GW. Not all of this capacity runs at full load, which is true of any computing platform.

This growth is partly due to the increase in the number of servers and equipment racks needed (which also increases the floor space), but also to the power consumption of individual servers as chips become more powerful. Most equipment racks use 3-5kW of power, and some can go up to 20kW if power and cooling can be provided.
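As a back-of-the-envelope check, the rack count implied by a typical facility's power budget can be sketched as follows. The 4kW rack figure is the midpoint of the text's 3-5kW range; the 1.5x overhead multiplier (PUE-style, for cooling and power distribution) is an assumption, not a figure from the text.

```python
# Back-of-envelope: racks supportable by a "typical" 30 MW data center.
dc_power_kw = 30_000      # 30 MW facility, per the text
rack_power_kw = 4         # mid-range rack draw (text cites 3-5 kW)
overhead = 1.5            # assumed overhead multiplier for cooling etc.

racks = dc_power_kw / (rack_power_kw * overhead)
print(f"~{racks:,.0f} racks")
```

Under these assumptions a 30MW facility supports on the order of a few thousand racks, which is why floor space grows alongside power.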

Therefore, 100GW is needed to power the “cloud”, and this number is continuing to grow rapidly. We are also seeing growth in small regional data centers in second- and third-tier cities. Companies and governments often have private data centers as well. The power required for these different areas varies greatly, and a benchmark of 1-5MW is usually reasonable.

2. Power consumption of “edge devices”

In addition to data centers, the edge devices themselves and the components inside them also consume power. For battery-powered devices in particular, it is critical to budget power in watts or milliwatts. For example:

  • A sensor might use less than 10mW when idle and 100mW when actively processing data
  • A Raspberry Pi might use 0.5W
  • A smartphone processor might use 1-3W
  • An IoT gateway (controlling various local devices) might use 5-10W
  • A laptop might need 50W
  • A serious crypto-mining rig might use 1kW
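Laid out on one scale, the list above spans five orders of magnitude from an idle sensor to a crypto rig. A quick sketch (midpoints chosen where the text gives a range):

```python
# Rough per-device power draw in watts, taken from the list above.
device_power_w = {
    "sensor (idle)": 0.01,
    "sensor (active)": 0.1,
    "Raspberry Pi": 0.5,
    "smartphone processor": 2.0,
    "IoT gateway": 7.5,
    "laptop": 50.0,
    "crypto rig": 1000.0,
}

# Print devices from lowest to highest draw.
for name, watts in sorted(device_power_w.items(), key=lambda kv: kv[1]):
    print(f"{name:22s} {watts:10.3f} W")

# Spread from the smallest to the largest entry.
spread = device_power_w["crypto rig"] / device_power_w["sensor (idle)"]
```

That spread is why "edge device" is such a slippery category: the same label covers hardware separated by a factor of 100,000 in power.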

Innovation is shifting the power consumption threshold. Some researchers are working on sub-milliwatt vision processors, such as ARM’s design that can run machine learning algorithms on extremely low-power devices.

But perhaps the most interesting "edge device" is Nvidia's forthcoming high-end Pegasus board, aimed at self-driving cars. It is a 500W supercomputer. That might sound like a lot, but it is actually less than 1% of the power of most car engines: a high-end Tesla P100D in "ludicrous mode" delivers over 500kW to the wheels, roughly 1,000 times as much, and even a car's air conditioning might use 2kW.
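The ratios in that comparison can be checked directly, using the figures from the text:

```python
# Onboard computer vs. the rest of the car (figures from the text above).
pegasus_w = 500            # Nvidia Pegasus board
drivetrain_w = 500_000     # Tesla P100D "ludicrous mode" peak: 500 kW
aircon_w = 2_000           # car air conditioning

ratio = drivetrain_w / pegasus_w   # drivetrain vs. compute
share = pegasus_w / drivetrain_w   # compute as a share of drivetrain power
print(f"drivetrain/compute: {ratio:.0f}x; compute share: {share:.1%}")
```

Even a 500W "supercomputer" is a rounding error in a vehicle's power budget, which is the point: cars can afford serious onboard compute.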

Of course, there are many edge computing platforms: billions of phones, vehicles, and PCs, plus potentially billions of sensors. But most of them are uncoordinated.

3. Power consumption of the network middle layer

In distributed computing, then, we have milliwatts at one end (the devices at the edge of the network) and gigawatts at the other (the cloud). What about the middle of the network?

Many companies are talking about MEC (multi-access edge computing) and fog computing products, with servers designed to run at cellular base stations, network aggregation points, fixed network nodes and other places.

Some are "micro data centers" capable of housing several server racks near the largest cell towers. The largest might be 50kW container-sized units, but these are very rare and require a dedicated power source.

It's worth noting that a typical macro cell tower might have 1-2kW of power. So if we consider that maybe 10% of that can be used for the compute platform instead of the wireless (a generous assumption), we could theoretically get to 100-200W. Or in other words, a cell tower edge node would have less than half the power of a single onboard computer.
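The arithmetic behind that comparison, under the text's own (generous) assumptions:

```python
# Power budget for an edge-compute node at a macro cell site.
site_power_w = 1500        # typical macro site: 1-2 kW; midpoint used here
compute_share = 0.10       # the text's generous assumption: 10% for compute
edge_budget_w = site_power_w * compute_share

pegasus_w = 500            # one automotive onboard computer (Pegasus board)
print(f"Edge node budget: {edge_budget_w:.0f} W, "
      f"vs. {pegasus_w} W for a single onboard computer")
```

Even with a generous 10% allocation, a cell-site edge node has less power available than a single car's computer, which frames the rest of the argument.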

Others are smaller server units designed to connect to small cells, home gateways, cable street-side cabinets, or enterprise "white boxes", for which 10-30W is more reasonable.

2. Imagine 2023

Imagine five years from now, there might be 150GW of large data centers, plus a fair number of mid-sized regional data centers, and private enterprise facilities.

We could have 10 billion phones, PCs, tablets, and other small endpoints contributing to the distributed edge, though they would obviously spend much of their time idle. We could also have 10 million nearly (if not fully) autonomous vehicles, which would require a great deal of onboard computation.

Now, let's say we have 10 million "deep" network computing nodes, at cell sites large and small, built into WiFi APs or controllers, or in cable/fixed street cabinets. They'll probably be rated between 10W and 300W, though few will be able to hit 300W. Most will go for 100W, to allow for simpler computing. (Frankly, that's a generous prediction, but let's look at it.)

Add to that 20,000 container-sized 50kW units, or central offices re-purposed as data centers.

In other words, we might end up with:

  • Large data centers: 150GW
  • Regional and enterprise data centers: 50GW
  • Large/aggregation-point "network edge" mini data centers: 20,000 x 50kW = 1GW
  • "Deep" network edge nodes: 10 million x 100W = 1GW
  • PCs: 1 billion x 50W = 50GW
  • "Small" device edge computing nodes: 10 billion x 1W = 10GW
  • In-vehicle computing nodes: 10 million x 500W = 5GW
  • Sensors and low-end devices: 10 billion x 100mW = 1GW
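Summing the categories above confirms the order of magnitude of the argument: the network edge accounts for well under 1% of the total. (Figures are the estimates from the list, in GW.)

```python
# Totals from the 2023 thought experiment above, in gigawatts.
capacity_gw = {
    "large data centers": 150,
    "regional/enterprise data centers": 50,
    "network-edge mini DCs (20k x 50 kW)": 1,
    "deep network edge nodes (10m x 100 W)": 1,
    "PCs (1bn x 50 W)": 50,
    "small device edge (10bn x 1 W)": 10,
    "in-vehicle compute (10m x 500 W)": 5,
    "sensors/low-end (10bn x 100 mW)": 1,
}

total = sum(capacity_gw.values())
network_edge = (capacity_gw["network-edge mini DCs (20k x 50 kW)"]
                + capacity_gw["deep network edge nodes (10m x 100 W)"])

print(f"Total: {total} GW; network edge: {network_edge} GW "
      f"= {network_edge / total:.1%} of the whole")
```

Out of roughly 268GW in this projection, the in-network edge contributes about 2GW, which is where the sub-1% figure in the next section comes from.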

This is a very rough analysis. Many devices are idle most of the time and may need to offload functionality to save battery power. Laptops are often completely shut down. But again, network edge computers won't be running at 100%, 24x7.

3. Edge computing: less than 1% of computing power

So, at a rough, order-of-magnitude level, the overall actual “edge of the network” is less than 1% of total computing power, optimistically. And pessimistically, it’s probably only 0.1%.

Unless the power feeds to network infrastructure are massively upgraded, alongside the backhaul upgrades installed for 5G or FTTH deployments, the power for a bigger network edge simply will not be there.

Blockchain-based edge “fogs” are unlikely to truly solve this problem, even if they also use decentralized, blockchain-based power supply and management.

For the "edge" to be the new frontier, this 0.1%-1% of computing capacity would have to be so important that it pulls everything else into its orbit and indirectly controls it. Is that plausible?

The answer is no; in fact, the opposite is more likely. Device-based applications will selectively offload certain workloads to the network, or web-scale clouds will distribute certain functions outward. The network edge may become a control point for certain verticals or applications, and some security functions make sense there, for example as an evolution of today's CDNs. But will IoT management or AI be centralized at these edge nodes? That seems unlikely.

4. Conclusion

Network edge computing architectures such as MEC will become more important in the future. Even so, they will not be as powerful as the hype suggests.

Few applications will run only at the network edge. More often, the edge will handle a specific workload or microservice as a subset of a broader multi-tier application, with the main computation still done on the device or in the cloud. Collaboration between edge computing providers and industry/web-scale clouds is therefore necessary: the network edge is only one component in a larger solution, and rarely the most important one.

One thing is certain: mobile operators will not become distributed "quasi-Amazons", connecting all nearby cars via 5G and processing images for Industry 4.0 robots inside their networks.

MEC nodes may well host Amazon Greengrass or similar functions in bulk, but few developers will want to write directly to a telco's stand-alone distributed-cloud APIs, with or without network slicing or 5G QoS mechanisms.
