

Written By:

Mehdi Bennis, Associate Professor, University of Oulu

Published: 10 Mar 2020


CTN Issue: March 2020


A note from the editor:

In the 5G standards effort the Ultra Reliable Low Latency Communication (URLLC) channel has proven to be a difficult nut to crack, because once all of the application requirements are translated into latency and bit error rate requirements for a communications link, they pose a seemingly insurmountable hill, even if we can reach the Shannon limit. Of course, as Gimli would tell you, if you can’t climb a mountain you should go under it; and for this edition of CTN, Mehdi Bennis, our guide through the mines of Moria, gives us some clues as to how to keep moving forward in URLLC, presenting three core concepts: statistical URLLC, non-RF based transmission, and communication-control codesign [1]. Maybe there are more? Your suggestions, as always, are welcome.

Alan Gatherer, Editor-in-Chief

Statistical URLLC: Tale of Tails

Mehdi Bennis


Associate Professor and Head of the Intelligent Connectivity and Networks/Systems Group (ICON)

University of Oulu

URLLC is about extreme and rare events, where the goal is tantamount to characterizing and taming the TAIL of the latency/throughput distribution (as opposed to the average-based design of eMBB). While industry rushed towards system-level simulations (in 3GPP, reliability is calculated by counting the erroneous packets and dividing by the total number of packets transmitted in the observed period), it is now back to understanding the fundamentals driving the tail behavior of URLLC. Quite interestingly, two extremes emerge when it comes to URLLC, which further underscores the importance of codesign. On one hand, predicting rare events (9-nines) requires different modalities, since relying on the RF data modality alone may not be sufficient due to lack of data or statistical irrelevance of the data. On the other hand, for some control applications (e.g., a robotic arm) a few consecutive packets can be lost, which suggests that the very stringent 9-nines requirements can be significantly relaxed in some cases! This begs the question: instead of maximizing communication reliability as in 3GPP, what is the maximum number of consecutive packets that can be lost or delayed while still ensuring stability and safety of the control application?

Unlike the prevailing model-based URLLC, ensuring communication reliability when no models are available is a daunting task. This falls under the category of statistical learning, whereby reliability must be coupled with sample complexity (how many samples are needed to learn a model for a given target reliability) while remaining robust to out-of-sample distributions, noisy data, and other aspects such as generalization and dynamics. While confronting the known-unknown problem is one side of the coin, combating errors arising from the unknown unknowns is more difficult to overcome and requires more wireless resources (more transmit power, more bandwidth, and so forth).
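To make the 3GPP counting definition and the sample-complexity issue concrete, here is a minimal Python sketch. The latency model (exponential body with rare spikes) and all thresholds are hypothetical illustrations, not values from the article; the point is that an empirical count can only resolve outage probabilities on the order of one over the number of samples, which is why 9-nines tails cannot be certified by simulation alone.

```python
import random

def empirical_reliability(latencies, deadline):
    """3GPP-style counting: fraction of packets that meet the latency deadline."""
    met = sum(1 for t in latencies if t <= deadline)
    return met / len(latencies)

random.seed(0)
# Hypothetical heavy-tailed latency samples (ms): exponential body plus
# occasional 10 ms spikes occurring with probability 1e-3.
samples = [random.expovariate(1.0) + (10.0 if random.random() < 1e-3 else 0.0)
           for _ in range(100_000)]

deadline_ms = 8.0
rel = empirical_reliability(samples, deadline_ms)
print(f"Empirical reliability at {deadline_ms} ms deadline: {rel:.6f}")

# Sample-complexity rule of thumb: to even *observe* failures at a target
# outage probability eps, you need on the order of 1/eps samples.
eps = 1e-9  # 9-nines territory
print(f"Samples needed to see ~1 failure at eps={eps:g}: {1/eps:.0e}")
```

Note the asymmetry: counting verifies the body of the distribution cheaply, but the tail that URLLC cares about sits far beyond what any feasible sample size can reach, motivating the model-based and learning-based approaches discussed above.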

Life Beyond RF: Best of Both Worlds

Extreme/rare event prediction based on RF alone may be not only inefficient in terms of resource utilization (pilots, channel estimation, etc.) but also insufficient: no RF data may be available, data acquisition may be expensive, or, as is often the case, the data may carry no statistical relevance. In such cases, other rich non-RF modalities must be exploited, whereby both modalities can be fused while accounting for their pros and cons. Take an RGB-D camera: while it provides rich sensory information “for free”, it is sensitive to occlusion, which requires multiple cameras and hence more processing power. On the flip side, RF can help see through walls and in NLOS conditions, yet it consumes wireless resources. Hence, how to efficiently fuse multiple modalities is of the essence. Owing to advances in machine learning and computer vision, this can be a game changer in “wireless”: while all wireless communication is RF-feedback based, visual modalities do not consume wireless resources and can help a drone fly using vision alone without exchanging location information, or an autonomous vehicle navigate. Among the various non-RF modalities, camera images are a suitable candidate. Observing a sequence of depth-image frames enables detecting signs of transitions between LoS and non-LoS conditions in millimeter-wave links. However, the captured images ought to be exchanged over wireless links, which may incur huge communication overhead and violate data privacy. With this in mind, a communication-efficient multimodal split learning framework is needed [2]. As shown in Fig. 1, a global model is split into two parts connected over wireless links: convolutional neural network (CNN) layers processing depth-camera images, and recurrent neural network (RNN) layers combining the CNN outputs with RF signal inputs.
The idea boils down to compressing the CNN output dimension by adjusting the pooling region size, thereby achieving lower communication overhead while preserving more data privacy. Figure 2 highlights the benefits of combining both modalities for efficient prediction, while depicting interesting tradeoffs where more accuracy requires more communication, as well as settings where image-based prediction outperforms the RF-based baseline. Notably, even 1-pixel image-based prediction, when combined with RF, outperforms RF-only prediction.
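The compression knob above can be illustrated with a toy NumPy sketch, assuming (hypothetically) a 32x32 CNN feature map at the device and float32 activations sent uplink; this is not the authors' implementation in [2], only a demonstration of how the pooling region size trades payload size for feature resolution.

```python
import numpy as np

def avg_pool2d(feat, p):
    """Average-pool an (H, W) feature map with pooling region size p x p
    (H and W assumed divisible by p)."""
    h, w = feat.shape
    return feat.reshape(h // p, p, w // p, p).mean(axis=(1, 3))

rng = np.random.default_rng(0)
cnn_out = rng.standard_normal((32, 32))  # hypothetical device-side CNN output

for p in (1, 2, 4, 8):
    z = avg_pool2d(cnn_out, p)
    payload_bits = z.size * 32  # float32 activations crossing the split point
    print(f"pool {p}x{p}: output {z.shape} -> {payload_bits} bits uplink")
```

Quadrupling the pooling region side cuts the over-the-air payload by 16x, which is exactly the accuracy-versus-communication tradeoff Figure 2 sweeps.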

Figure 1: Multimodal SL architecture that integrates image and RF signal (Img+RF) features for predicting mmWave received power.
Figure 2: Test RMSE for different compression levels versus the corresponding communication payload size (in bits) for transmitting forward-propagation signals.

Communication-Control Co-Design: More Than the Sum of Its Parts

Real-time control over wireless links is a key URLLC application, encompassing platooning, UAV swarming, autonomous vehicles, and others, which pose very strict reliability and latency requirements; yet it is also a domain where some of those very requirements can be relaxed, and where non-3GPP requirements such as control stability, robustness, safety, and resiliency matter more. While stability in the presence of a known model dates back several decades (PID controllers, system identification), with a rich literature in model predictive control, the bigger challenge arises when there is no model and/or the dimensionality of the state space is very large. In these cases, control decisions must be made based on data samples, which prompts the question of how to ensure safety and stability of a closed-loop control application when learning from data, let alone the fact that these applications are safety-critical, making black-box ML solutions ill-suited. While control operations help relax some of the connectivity requirements, wireless connectivity is also essential in enabling scalable and robust control systems: decoupling sensors, actuators, and their controllers allows control systems to harness dispersed computing power and data generated at the network edge over wireless links.
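The question of how many consecutive packet losses a control loop tolerates can be sketched for the simplest possible case: a hypothetical scalar plant x_{k+1} = a*x_k + u_k with a deadbeat controller u = -a*x delivered over a lossy link. The plant gain, initial state, and safety bound below are illustrative assumptions, not values from any cited work.

```python
def max_consecutive_losses(a, x0, safety_bound):
    """For the scalar plant x_{k+1} = a*x_k + u_k with deadbeat control
    u = -a*x sent over a lossy link (actuator applies u = 0 when the
    packet is lost), count how many consecutive losses the loop tolerates
    before |x| exceeds the safety bound."""
    x, losses = x0, 0
    while abs(x) <= safety_bound:
        x *= a          # lost packet: plant runs open loop for one step
        losses += 1
    return losses - 1   # the last open-loop step violated the bound

# Hypothetical unstable plant (a > 1): each lost control packet
# multiplies the state error by a.
print(max_consecutive_losses(a=1.5, x0=1.0, safety_bound=10.0))  # → 5
```

Even this toy model shows the codesign point: the tolerable burst length depends on plant dynamics and the safety bound, not on a fixed 9-nines link target, so the radio can budget reliability per application rather than uniformly.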


  1. J. Park, S. Samarakoon, H. Shiri, M. K. Abdel-Aziz, T. Nishio, A. Elgabli, and M. Bennis, "Extreme URLLC: Vision, Challenges, and Key Enablers."
  2. Y. Koda, J. Park, M. Bennis, K. Yamamoto, T. Nishio, and M. Morikura, "Communication-Efficient Multimodal Split Learning for mmWave Received Power Prediction," IEEE Communications Letters, 2020.

Statements and opinions given in a work published by the IEEE or the IEEE Communications Society are the expressions of the author(s). Responsibility for the content of published articles rests upon the author(s), not the IEEE nor the IEEE Communications Society.
