5G ushers in new "genius" networks built to cope with rising levels of complexity, prediction, and real-time decision making, delivering the promised performance gains not only in enhanced mobile broadband applications but also in IoT and mission-critical use cases. At the heart of this evolutionary step are machine learning algorithms. Machine learning is what made networks "intelligent" in the 4G era, enabling dynamic, real-time optimization features such as resource loading, power budget balancing, and interference detection. 5G adds support for new antenna capabilities, high-density and heterogeneous network topologies, and uplink and downlink channel allocation and configuration based on payload type and application. While machine learning has uses in every layer of a 5G network, from the physical layer to the application layer, base stations are emerging as a key application for it.

More resources to coordinate means better performance

One of the hallmarks of next-generation 5G base stations is the use of advanced antenna capabilities. These include, but are not limited to, massive multiple-input multiple-output (MIMO) antenna arrays, beamforming, and beam steering. Massive MIMO is the use of antenna arrays with a large number of active elements; depending on the frequency band in which it is deployed, a massive MIMO design can use anywhere from 24 active antenna elements to hundreds. One general use of MIMO is sending and receiving parallel and redundant streams of information to compensate for errors caused by interference. A use specific to massive MIMO, however, is beamforming and, in more advanced systems, beam steering. Beamforming uses a phased array to create an energy beam that focuses and extends signal transmission and reception between a base station and a specific mobile device.
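As a rough illustration of what beam coordination involves, the sketch below computes the per-element phase weights that steer a uniform linear array toward a chosen angle. The element count, spacing, and angles are illustrative assumptions; real gNB beam management works from measured channel state, not a fixed geometric formula.

```python
import numpy as np

def steering_weights(n_elements, spacing_wl, theta_deg):
    """Phase-only weights that steer a uniform linear array toward
    theta_deg (measured from broadside); spacing is in wavelengths."""
    n = np.arange(n_elements)
    phase = 2 * np.pi * spacing_wl * n * np.sin(np.radians(theta_deg))
    return np.exp(-1j * phase)

def array_gain(weights, spacing_wl, theta_deg):
    """Normalized array-factor magnitude in the direction theta_deg."""
    n = np.arange(len(weights))
    response = np.exp(1j * 2 * np.pi * spacing_wl * n
                      * np.sin(np.radians(theta_deg)))
    return abs(np.dot(weights, response)) / len(weights)
```

Steering an 8-element, half-wavelength-spaced array to 30 degrees gives full gain in that direction and sharply reduced gain toward broadside, which is why the phases must be recomputed continuously as a device moves.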
Beam steering is the ability to control that beam so it follows devices as they move through the antenna array's coverage area. When massive MIMO is fully utilized and beamforming and beam steering are optimized, network operators and consumers alike benefit from increased network capacity, extended range, greater data throughput, reduced interference, and better power efficiency.

But how can machine learning help with this? Imagine a race between a 10-oar boat and a 20-oar boat. The 10-oar boat is coordinated not only by cadence but by real-time corrections to heading and cadence, based both on what is happening now and on what is predicted to happen next. The 20-oar boat, by contrast, has one captain who cannot coordinate the cadence and can only make corrections based on general information about what has already happened. The former wins the race, while the latter's oars make little progress and in some cases actually interfere with one another. The same is true for massive MIMO: to fully realize the benefits of massive MIMO, beamforming, and beam steering, base stations employ machine learning to provide real-time and predictive analytics and modeling, so they can better schedule, coordinate, configure, and select which arrays to use and when.

Precise positioning

The new 5G network standards call for higher-density deployments of small cells working alongside larger macro cells and multiple air-interface protocols. The vision is to deploy small cells in indoor locations and dense urban environments, where GPS positioning is not always reliable and the radio frequency (RF) environment is far from predictable. Understanding the location of devices interacting with the network is critical not only for application-layer use cases but also for real-time network operations and optimization.
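A classical baseline for this kind of RF positioning is trilateration from range estimates, which learned models then refine with richer channel features. A minimal least-squares sketch, with hypothetical anchor positions and measured distances:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position estimate from anchor coordinates and
    measured distances, linearized by subtracting the first equation."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    a1, d1 = anchors[0], dists[0]
    # 2*(a_i - a_1) . x = d_1^2 - d_i^2 + |a_i|^2 - |a_1|^2
    A = 2 * (anchors[1:] - a1)
    b = (d1**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a1**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With noisy, multipath-distorted distance estimates the closed-form answer degrades quickly, which is where learned corrections to the range estimates earn their keep.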
Therefore, it is crucial not only to accurately locate a user device but also to track its movement within the coverage area. To this end, machine learning is being used to estimate the location of user devices from RF data and triangulation techniques. While this is not a new concept, machine learning algorithms have produced substantial improvements in accuracy, precision, and feasibility for widespread use compared with previous approaches. This is all the more important because these improvements are being achieved in environments that are orders of magnitude more complex and dynamically variable than ever before.

One network to rule them all - not as easy as it sounds

One of the drivers behind 5G is a single framework that can meet the changing and often conflicting needs of three use cases: enhanced mobile broadband (eMBB), massive IoT, and mission-critical applications. These use cases, previously served by dedicated heterogeneous networks, will now be supported by a common 5G network architecture while continuing to demand conflicting capabilities. Networks supporting eMBB use cases must be optimized for high speed, low-to-medium latency, and profitable capacity. Massive IoT networks, by contrast, require low cost, narrow bandwidth, low control-plane overhead, and high reliability. Mission-critical networks require high speed, low latency, and high reliability. To make this vision a reality, 5G has been designed to be highly variable and flexible in control-plane and channel configuration. It is therefore critical that 5G networks be able to predict payload types and usage from changing conditions (such as historical load data, RF conditions, location, and various other factors) in order to configure and utilize channel resources efficiently and dynamically.
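One toy way to picture that prediction step is a nearest-neighbour lookup over historical observations. Everything here is a hypothetical stand-in (the features, labels, and history are invented for illustration); production systems use far richer models and inputs.

```python
import numpy as np

# Hypothetical history: (hour of day, mean cell load) -> dominant slice type.
HISTORY = [
    ((8, 0.7), "eMBB"), ((9, 0.8), "eMBB"), ((20, 0.9), "eMBB"),
    ((2, 0.1), "mIoT"), ((3, 0.1), "mIoT"), ((4, 0.2), "mIoT"),
    ((17, 0.6), "URLLC"), ((18, 0.7), "URLLC"),
]

def predict_slice(hour, load, k=3):
    """Majority vote among the k nearest historical observations,
    guessing the dominant slice type for the next scheduling window."""
    feats = np.array([f for f, _ in HISTORY], dtype=float)
    labels = [lab for _, lab in HISTORY]
    # Scale hour-of-day into [0, 1] so both features weigh comparably.
    query = np.array([hour / 24.0, load])
    d = np.linalg.norm(feats * [1 / 24.0, 1.0] - query, axis=1)
    nearest = [labels[i] for i in np.argsort(d)[:k]]
    return max(set(nearest), key=nearest.count)
```

The predicted slice type would then drive channel-configuration choices such as numerology or control-plane overhead for the coming window.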
Thus, machine learning is being used not only to predict the characteristics and capabilities of a user's device, its likely use-case requirements, and RF conditions, but also to predict the type of content most likely to be requested, and to bring that content closer to the end user through edge caching techniques. For example, historical trend data might show that, given a base station's proximity to a university and the titles currently trending on Netflix or Disney+, specific movies should be cached at certain times of day; holding that content close to the base station reduces network congestion, buffering, and latency. Similarly, a base station near an intersection that becomes congested at certain times of day may need to prioritize the traffic and V2X sensor data that support ADAS or autonomous driving applications.

The next step in evolution

As an industry, we are at a critical evolutionary point as 5G and machine learning come together, setting us on a path toward leapfrog gains in network capability and efficiency, enabled by increasingly sophisticated functionality and adaptability. But this is an evolution, not a revolution, and these are very early days. Today's 5G machine learning applications are just the beginning of the potential that can be unlocked, first at the physical layer in the base station, and then, as these two foundational technologies converge and move into the application layer, in the era of genius networks.