How to Understand Fog Computing and Edge Computing in Simple Terms

Over the past few decades, there has been a huge shift from on-premises software to cloud computing. By storing data and performing computation in the cloud, we have been able to do more on our phones, personal computers, and IoT devices without adding more memory or computing power to the devices themselves. However, with the growing popularity of the Internet of Things, things are about to start moving in the other direction.

There are many reasons for this change, including the need for extremely low latency in certain applications, such as self-driving cars. Moving computing power closer to the edge of the network reduces costs and improves safety.

Matt Vasey, who focuses on Microsoft’s IoT strategy, said:

“Ideal use cases for fog and edge computing include deploying computing intelligence at the edge where ultra-low latency is critical, operating in geographically dispersed and poorly connected areas, or generating terabytes of data that cannot be quickly transferred between local and cloud in real time.”

What are fog computing and edge computing?

Let’s first briefly talk about the basic concepts of the two.

1. Fog Computing

The term was first coined by Cisco in 2011 as a contrast to cloud computing. Rather than relying on powerful centralized servers, fog computing consists of less powerful, more widely distributed computers with various functions, which permeate appliances, factories, cars, street lights, and other everyday objects.

Simply put, it expands the concept of cloud computing. Compared with cloud computing, it is closer to where the data is generated. Data, data-related processing and applications are concentrated in devices at the edge of the network, rather than being stored almost entirely in the cloud. The name "fog" here comes from the saying "fog is a cloud closer to the ground."

2. Edge Computing

Edge computing pushes fog computing's idea of "processing power in the local network" even further toward the device, although as a concept it was actually proposed earlier than fog computing. Its origins can be traced back to the 1990s, when Akamai launched the content delivery network (CDN), which set up transmission nodes close to end users. These nodes cache static content such as images and videos.

Edge computing places processing power closer to the data source, with applications initiated at the edge. This yields faster network responses and meets the industry's basic needs for real-time operation, application intelligence, security, and privacy protection. Edge computing sits between physical devices and their industrial connections, or directly at the edge on the physical devices themselves.

Fog computing has many similarities with edge computing

The terms “fog computing” and “edge computing” seem more or less interchangeable, and they do have several key similarities.

  • Both fog computing and edge computing systems move data processing to the source of data generation;
  • Both try to reduce the amount of data sent to the cloud to reduce latency;
  • Through the above strategies, both can improve system response time in remote critical applications, increase system security by reducing the need to send data over the public Internet, and reduce costs.

Some applications may collect a lot of data, which is expensive to send to a central cloud service. But only a small amount of the data they collect may be useful. If some processing is done at the edge of the network and only relevant information is sent to the cloud, costs can be effectively reduced.

For example, with security cameras, it would be very expensive to send 24 hours of video to a central server, of which 23 hours might just be an empty hallway. With edge computing, you can choose to send only the hour in which something actually happened.
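The camera example above can be sketched in a few lines. This is a minimal illustration, not a real camera pipeline: the hourly motion scores, the threshold, and the function name are all hypothetical, standing in for whatever motion-detection logic runs on the edge device.

```python
# Edge-side filtering sketch for a security camera: only hours whose
# (hypothetical) motion score crosses a threshold are queued for upload.

def select_hours_to_upload(motion_scores, threshold=0.5):
    """Return the indices of the hours worth sending to the cloud."""
    return [hour for hour, score in enumerate(motion_scores)
            if score >= threshold]

# 24 hourly motion scores: almost all near zero (an empty hallway),
# with one hour of real activity.
scores = [0.01] * 24
scores[14] = 0.9  # something happened during hour 14

upload = select_hours_to_upload(scores)
print(upload)  # only hour 14 is uploaded
print(f"uploading {len(upload)} of {len(scores)} hours of footage")
```

Only the flagged hour crosses the network; the other 23 hours of empty hallway never leave the device, which is exactly the cost reduction the text describes.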

Both fog computing and edge computing involve processing data closer to the origin. The key difference is exactly where the processing occurs.

Fog computing and edge computing are used differently

As we can see, the two technologies are very similar. Fog computing happens at the local area network (LAN) level, using a centralized system that interacts with industrial gateways and embedded computer systems. In edge computing, by contrast, processing largely takes place on the IoT devices themselves.

To differentiate between them, let us consider the use case of a smart city.

Imagine a smart city equipped with a smart traffic management infrastructure, with a sensor attached to the traffic light that detects how many vehicles are waiting on each side of the intersection and prioritizes turning the light green for the lanes with the most waiting vehicles. This is a fairly simple calculation that can be performed in the traffic light itself using edge computing. This reduces the amount of data that needs to be sent over the network, thereby reducing operating and storage costs.
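The "fairly simple calculation" in the traffic light can be sketched as follows. This is an illustrative toy, assuming the sensor reports one vehicle count per approach; the approach names and counts are made up.

```python
# Edge logic running inside the traffic light itself: pick the
# approach with the most waiting vehicles for the next green phase.

def next_green(vehicle_counts):
    """Return the approach (e.g. 'east') with the longest queue."""
    return max(vehicle_counts, key=vehicle_counts.get)

# Hypothetical sensor readings for one intersection.
counts = {"north": 4, "south": 1, "east": 7, "west": 2}
print(next_green(counts))  # 'east' has the most waiting vehicles
```

A decision this small needs no network round trip at all, which is why it fits naturally on the edge device.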

Now, imagine those traffic lights were part of a network of connected objects, including more traffic lights, pedestrian crossings, pollution monitors, bus GPS trackers, and more.

The decision about whether to turn a traffic light green in five or ten seconds becomes more complicated. Maybe a bus is running late on one side of the intersection. Maybe it starts to rain, and to encourage residents to travel more actively, the city decides to prioritize pedestrians and cyclists when it rains. Is there a crosswalk or bike lane nearby? Is anyone using it? Is it raining? And so on.

In this more complex case, the computational logic will also be more complex, and we can deploy a micro data center locally to analyze data from multiple edge nodes. These micro data centers are like local mini clouds within a local area network and are considered fog computing.
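The fog-level version of the decision might look something like the sketch below. Everything here is a hypothetical illustration of the scenario in the text: the edge-node names (`bus_gps`, `weather`, `crosswalk`), the report fields, and the weighting rules are invented to show how a micro data center could combine signals from several edge nodes on the LAN.

```python
# Fog-node sketch: a local micro data center aggregates reports from
# multiple edge nodes and computes the green-light duration for cars.

def green_duration(reports):
    """Combine edge-node reports into a green phase length in seconds."""
    seconds = 5  # baseline green phase
    if reports.get("bus_gps", {}).get("running_late"):
        seconds += 5  # extend the phase to help the late bus through
    if (reports.get("weather", {}).get("raining")
            and reports.get("crosswalk", {}).get("pedestrians_waiting", 0) > 0):
        seconds -= 3  # rain policy: prioritize waiting pedestrians
    return max(seconds, 2)  # never drop below a minimum phase

reports = {
    "bus_gps": {"running_late": True},
    "weather": {"raining": True},
    "crosswalk": {"pedestrians_waiting": 2},
}
print(green_duration(reports))  # 5 + 5 - 3 = 7 seconds
```

The point of the sketch is architectural: no single edge node holds all of these signals, so the aggregation naturally lives one level up, in the fog.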

So, which way is "better"?

According to a recent report by Million Insights, the global edge computing market size is expected to reach approximately $3.24 billion by 2025. As the Internet of Things continues to grow and produce more massive amounts of data, it will become imperative to process data close to the point of generation.

Edge computing and fog computing will both play an important role in the future of IoT. Like many IoT decisions, such as which type of connectivity to choose, the answer is not black and white: whether fog or edge computing is "better" depends on the specific application, its requirements, and the desired outcomes.
