Alan Gatherer, Senior Technical Vice President, Baseband SoC, Huawei USA
Published: 5 Dec 2018
CTN Issue: November 2018
A note from the editor:
These days it really seems that everything gets better with AI, at least if the internet is to be believed. Certainly there has been a lot of activity in trying to use AI to improve wireless network operation. In this month's article we take a look at what has been done and what impact it has, or may have, on wireless telecom performance. Your comments and critiques can be input at our man-machine interface at the end of the article.
Machine Learning in 5G Wireless Networks
First the Hype…
AI is everywhere. And the wireless network business is as enthusiastic at singing its praises as anyone else. This newsfeed has published an article on using AI to completely define the protocol used to encode and decode a signal. One might consider this a 6G-type application, at least in the commercial space, though companies already exist to try to commercialize this new field, and many keynotes and several papers lend a sense of urgency to the task. Recent surveys suggest many ways to apply AI to the wireless network, with one going so far as to say that "As such, the question is no longer if machine learning tools are going to be integrated into wireless networks but rather when such an integration will happen. In fact, the importance of an AI-enabled wireless network has already been motivated by a number of recent wireless networking paradigms such as mobile edge caching, context-aware networking, big data analytics, location-based services, and mobile edge computing".
Immediate Challenges for AI in Wireless
So far so good, but as an old-timer R&D guy in communications, I see the rate of advancement traditionally constrained by the need to be careful and not break the system. The network must remain stable, dropped calls must be kept below an agreed-upon percentage, and so on. Because the wireless network is already so complicated, the instinct is to avoid adding more complication. But AI would seem to add complication and uncertainty to the network in great heaping spades, so our enthusiasm must surely be tempered by extreme caution. Indeed, the recent, and excellent, overview of the key challenges of AI from the team at Berkeley specifically addresses some of the concerns that would naturally be at the forefront of any attempt to apply AI to wireless networks. For instance, they highlight the challenge of designing AI systems that "learn continually by interacting with a dynamic environment, while making decisions that are timely, robust, and secure." The paper also puts strong emphasis on the explainability of any decision made by the AI, stating "AI systems will often need to provide explanations for their decisions that are meaningful to humans. This is especially important for applications in which there are substantial regulatory requirements". Certainly the wireless space has substantial regulation on its performance, and when an AI system fails we will need a meaningful way to debug it, and to do so quickly in the field. And the AI guys are telling us that, well, they might not quite know how to do that yet. Having said that, the demand is clearly there for something, and the goal is in fact simplicity. One vendor white paper refers to a study showing that 56% of Mobile Network Operators globally have "little or no automation in their networks. But by 2025, according to their own predictions, almost 80% expect to have automated 40% or more of their processes, and one-third will have automated over 80%." Automation's goal is to simplify the control of the network to reduce OPEX, and the complexity of the network is seen there as a side effect of supporting multiple standards rather than of supporting 5G alone.
Finding Paths Forward
One way to apply AI-type technology without scaring ourselves to death is in the area of defining parameters. There are many parameters in a network, and quite a few of them are set using heuristic calculations because no solid closed-form solution exists for their value, the data needed to properly calculate them may not exist, or the calculation to properly set them may be prohibitively expensive. In these cases we are using human intuition and creativity to come up with good solutions. Alternatively, we can conceive of using AI to train a neural network or equivalent to set the parameters, based on the available data, with reasonable complexity. Once trained in an offline manner, the result can be thoroughly tested for stability before being deployed into a live network. So far so good. It is now very tempting to take the next step of allowing these deployed neural networks to continue to train on the data passing through them to reach an even better "local" performance result. Increasingly, the wireless community is looking towards boutique optimizations, especially for special scenarios like stadiums and dense urban canyons. This is partially because we have drained all of the good stuff out of Shannon's theory as applied to flat hexagonal cells, or even the more sophisticated Poisson Point Process (PPP) models, and we need to start dealing more with the reality of deployments. Such dynamic optimizations can occur at various levels of the network, from optimizing the capacity of the physical-layer chip, through spectrum allocation, all the way to QoS optimization of services in the core network. However, we must be careful: we are now letting the machine make decisions in real time, perhaps doing things that have never been tested.
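The offline-train-then-gate workflow described above can be sketched in a few lines. This is a minimal illustration under invented assumptions: a plain least-squares fit stands in for the neural network, the "handover hysteresis" target and the synthetic measurement data are hypothetical, and the validation threshold is arbitrary.

```python
import numpy as np

# Hypothetical setup: learn a mapping from logged cell features
# (load, interference, user speed) to a parameter value (here, a
# handover hysteresis in dB) that a heuristic currently sets by hand.
rng = np.random.default_rng(0)

# Synthetic "historical" data standing in for real network logs.
X = rng.uniform(0.0, 1.0, size=(500, 3))          # load, interference, speed
true_w = np.array([2.0, -1.0, 0.5])               # unknown true relationship
y = X @ true_w + 3.0 + rng.normal(0, 0.05, 500)   # noisy observed settings

# Offline training: least squares with a bias term (a stand-in for
# training a small neural network on the same data).
Xb = np.hstack([X, np.ones((500, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Offline validation gate before any live deployment: reject the model
# unless its error on held-out data is below an agreed threshold.
X_val = rng.uniform(0.0, 1.0, size=(100, 3))
y_val = X_val @ true_w + 3.0
pred = np.hstack([X_val, np.ones((100, 1))]) @ w
rmse = float(np.sqrt(np.mean((pred - y_val) ** 2)))
deployable = rmse < 0.1   # only a model that passes the gate goes live
```

The point of the sketch is the gate at the end: the model is frozen and tested for acceptable behavior before it ever touches the live network, which is exactly what the continual-learning variant gives up.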
More specifically, what is going on here is something called Reinforcement Learning (RL) in the AI world, and our friendly experts at Berkeley stated just one year ago that "…despite these successes, RL has yet to see widescale real-world application", though they remained optimistic that things would advance rapidly. But certainly the wireless community does not want to be the guinea pig for the first large-scale testing of RL.
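To make the RL concern concrete, here is a toy tabular Q-learning loop on an invented two-state "power control" problem. Everything here is illustrative (the states, actions, rewards, and transition probabilities are made up, and a real RAN would be nothing like a 2x2 table); the point is that the agent learns its policy by acting on the live system, which is precisely what makes deployment risky.

```python
import numpy as np

# Toy environment: states are interference levels {0: low, 1: high},
# actions are transmit power {0: low, 1: high}. Rewards are invented so
# that high power pays off only when interference is low.
rng = np.random.default_rng(1)
R = {
    0: {0: 1.0, 1: 3.0},   # low interference: high power is rewarded
    1: {0: 2.0, 1: -1.0},  # high interference: back off
}
Q = np.zeros((2, 2))                 # Q-value table, Q[state, action]
alpha, gamma, eps = 0.1, 0.9, 0.2    # learning rate, discount, exploration

state = 0
for _ in range(5000):
    # epsilon-greedy: mostly exploit, sometimes explore at random --
    # note the exploration happens on the "live" system
    action = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[state]))
    reward = R[state][action]
    # high power raises the chance of high interference next step
    next_state = int(rng.random() < (0.8 if action == 1 else 0.2))
    # standard Q-learning temporal-difference update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

policy = Q.argmax(axis=1)   # learned action per state
```

In this toy the agent settles on "high power when interference is low, low power when it is high", but it only gets there by repeatedly trying the bad actions too, which is the behavior operators are reluctant to allow on a production network.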
The Road Already Traveled Has a Ways to Go
Recently, data mining has been applied successfully to network operations, and though it is related to AI in the kinds of math used and the amount of data required for success, it is a more passive technology, providing useful insight through the development of new Key Performance Indicators (KPIs) but leaving the final decision making to a human. The challenge for such data mining is the same one AI will face when applied to wireless networks: the lack of data available to discover KPIs or to train a neural network. The network is already strained moving customer data around, without having to support the significant extra load of training data for parameters. This was a problem "solved" by the introduction of Cloud RAN (all the data is now in a central server) and then unsolved by eCPRI (where much of the physical-layer processing is pushed back to the antenna site because we don't want to fronthaul all of that antenna data to a centralized spot). So one interesting challenge for wireless is the integration of AI into the very edge of the network, and then the development of a system where most of the training occurs locally but the AI system can still behave as an overall intelligent agent for the network, making good decisions without much data flowing between the training locations.
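The "train locally, coordinate globally" idea at the end of that paragraph is essentially federated averaging. The sketch below is an assumed minimal setup, not any vendor's system: each edge site fits the same simple model on data that never leaves the site, and only the model weights, which are tiny compared with raw antenna data, cross the fronthaul to be averaged.

```python
import numpy as np

# Illustrative federated-averaging sketch. Each "site" holds local
# measurements that are never transmitted; only fitted weights move.
rng = np.random.default_rng(2)
true_w = np.array([1.5, -0.7])   # the (unknown) common relationship

def local_fit(n_samples):
    """Train on data that stays at the edge site; return weights only."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(0, 0.1, n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Three edge sites train independently; the aggregator averages the
# weight vectors to produce one network-wide model.
site_weights = [local_fit(200) for _ in range(3)]
global_w = np.mean(site_weights, axis=0)
err = float(np.linalg.norm(global_w - true_w))   # close to true_w
```

Compare the traffic: three weight vectors of length 2 crossed the network, versus 600 raw samples that would have been fronthauled in a fully centralized design. That ratio is the whole argument for training at the edge.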
The Really Scary Stuff
A recent article in RCRWireless caught my attention because it posited that increasing complexity in 5G networks would make AI a necessity. Essentially, it suggests we have built a network protocol so complicated that it cannot run efficiently without some sort of intelligent learning in the loop. This would be a very bad sign for the success of 5G. The industry folklore that all of the odd Gs are disasters saved by the even Gs would be reinforced: 3G was also a challenge to control, with its complicated parameter set options, and that led to the rise of Self Optimizing Networks (SON) as a significant effort to alleviate the problem. Sound familiar? Certainly we do not want to repeat history this quickly.
Additionally, one part of the Berkeley challenge we have not touched on is the issue of security in learning systems. Recent research in the AI space has revealed that even subtle changes in training data or inputs can cause the trained system to behave in very different ways. The Berkeley paper puts the challenge thusly: "Build AI systems that are robust against adversarial inputs both during training and prediction (e.g., decision making), possibly by designing new machine learning models and network architectures, leveraging provenance to track down fraudulent data sources, and replaying to redo decisions after eliminating the fraudulent sources." Sounds complicated. The RCR article takes the AI glass as half full, expecting that AI will produce new and better ways of detecting malicious actors, but its focus is more on the core network than the edge.
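A toy example makes the adversarial-input problem tangible. This sketch is assumed for illustration (the weights and sample are invented, and it is not drawn from the Berkeley paper): a small, bounded perturbation of each input feature, chosen in the direction that hurts the model most, flips a linear classifier's decision.

```python
import numpy as np

# A "trained" linear classifier with invented weights and bias.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return the class (0 or 1) from the sign of the linear score."""
    return 1 if x @ w + b > 0 else 0

x = np.array([0.6, 0.1, 0.2])    # a benign input, score 0.6 -> class 1
assert predict(x) == 1

# Adversarial perturbation in the style of the fast gradient sign
# method: nudge each feature by at most eps against the decision margin.
eps = 0.3
x_adv = x - eps * np.sign(w)     # per-feature change bounded by eps
flipped = predict(x_adv)         # score is now -0.45 -> class 0
```

Every feature moved by at most 0.3, yet the decision reversed. At network scale the worry is the same mechanism applied to training data or live measurements, which is why the Berkeley paper asks for provenance tracking and the ability to replay decisions after removing fraudulent sources.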
Let’s be very careful. There is real promise in AI, but we need to start from a stable and efficient system and add improvements incrementally. At the same time, we can dream of, and experiment with, a future in which AI is an intrinsic part of the protocol. In the next year or so it will be interesting to see how AI technology impacts the definition of the next generation of wireless standards. This will be the first sign of how committed the industry will become to AI.
- “AI Enables Network Intelligence ZTE AI White Paper”, June 2018
- “Key Scenarios of Autonomous Driving Mobile Network”, Huawei 2018
Statements and opinions given in a work published by the IEEE or the IEEE Communications Society are the expressions of the author(s). Responsibility for the content of published articles rests upon the author(s), not the IEEE nor the IEEE Communications Society.