The field of machine learning (ML) has a long and remarkably successful history. For example, the idea of using neural networks (NNs) for intelligent machines dates back to as early as 1943, when a simple one-layer model was used to simulate the state of a single neuron. ML has shown overwhelming advantages in many areas, including computer vision, robotics, and natural language processing, where it is normally difficult to find a concrete mathematical model for feature representation. In those areas, ML has proved to be a powerful tool as it does not require a comprehensive specification of the model. In contrast to the aforementioned ML applications, the development of communications has relied heavily on theories and models, from information theory to channel modeling. These traditional approaches are showing serious limitations, especially in view of the increasing complexity of communication networks. Therefore, research on ML applied to communications, especially to wireless communications, is currently experiencing a tremendous boom.
This collection of Best Readings focuses on ML in the physical and medium access control (MAC) layers of communication networks. ML can be used to improve each individual (traditional) component of a communication system, or to jointly optimize the entire transmitter or receiver. Therefore, after introducing some popular textbooks, tutorials, and special issues in this collection, we divide the technical papers into the following eight areas:
- Signal Detection
- Channel Encoding and Decoding
- Channel Estimation, Prediction, and Compression
- End-to-End Communications and Semantic Communications
- Resource Allocation
- Distributed and Federated Learning and Communications
- Standardization, Policy, and Regulation
- Selected Topics
Even though ML in communications is still in its infancy, we believe that a growing number of researchers will dedicate themselves to these studies and that ML will greatly change the way communication systems are designed in the near future.
Editorial Remarks on Updating
Over the past two years, ML has been widely investigated and applied in communications, as evidenced by many special issues, workshops, and research labs. It is neither feasible nor necessary to list all publications here; instead, we have chosen publications that have already had, or will potentially have, significant impact. In particular, semantic communications and distributed/federated learning have emerged as active research topics in the past several years. We have therefore renamed the topic “end-to-end communications” to “end-to-end and semantic communications” and added a new topic, “distributed and federated learning and communications.”
First issued in March 2019 and updated July 2021
Contributors (March 2019 Issue)
Geoffrey Ye Li, Georgia Institute of Technology
Jakob Hoydis, Nokia Bell Labs
Elisabeth de Carvalho, Aalborg University
Alexios Balatsoukas-Stimming, École Polytechnique Fédérale de Lausanne
Zhijin Qin, Queen Mary University of London
Contributors (July 2021 Update)
Geoffrey Ye Li, Imperial College London
Alexios Balatsoukas-Stimming, Eindhoven University of Technology
Zhijin Qin, Queen Mary University of London
Le Liang, Southeast University
Faycal Ait Aoudia, Nokia Bell Labs
Onur Sahin, InterDigital
Matthew C. Valenti
Editor-in-Chief, ComSoc Best Readings
West Virginia University
Morgantown, WV, USA
O. Simeone, A Brief Introduction to Machine Learning for Engineers, Foundations and Trends in Signal Processing, vol. 12, no. 3-4, pp. 200-431, 2018.
Targeted specifically at engineers, this book provides a short introduction into key concepts and methods in machine learning (ML). Starting from first principles, it covers a wide range of topics, such as probabilistic models, supervised and unsupervised learning, graphical models, as well as approximate inference. Numerous reproducible numerical examples are provided to help understand the key ideas, while the well-selected and up-to-date list of references provides good entry points for readers willing to deepen their knowledge in a specific area. Overall, the book is an excellent starting point for engineers to familiarize themselves with the broad area of ML.
C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
This book not only presents developments in the area of machine learning but also provides a comprehensive introduction to the field. No previous knowledge of pattern recognition or machine learning is assumed, and readers only need to be familiar with multivariate calculus, basic linear algebra, and basic probability theory. It is aimed at graduate students, researchers, and practitioners in the area of machine learning, statistics, computer science, and signal processing.
I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, MIT Press, 2016.
This is a book on deep learning from some of the pioneers of the field. The book starts with background notions on linear algebra and probability theory. The second part discusses a range of neural network architectures that are most commonly used to solve practical problems and gives guidelines on how to use these architectures through practical examples. Finally, in the third part of the book, the authors discuss a wide range of research-related topics in neural networks.
J. Watt, R. Borhani, and A. K. Katsaggelos, Machine Learning Refined: Foundations, Algorithms, and Applications, Cambridge University Press, 2016.
Written by experts in signal processing and communications, this book contains both a lucid explanation of mathematical foundations in machine learning (ML) as well as the practical real-world applications, such as natural language processing and computer vision. It is a perfect resource and an ideal reference for students and researchers. It is also a useful self-study guide for practitioners working in ML, computer science, and signal processing.
R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998.
As an introductory book to reinforcement learning (RL), it is one of the main references in the field. It provides a clear and intuitive explanation of the core principles and algorithms in RL, with very useful examples. The second edition of the book (2018) includes the most recent developments in RL.
F.-L. Luo, Machine Learning for Future Wireless Communications, Wiley-IEEE Press, 2020.
This book covers a wide range of applications of machine learning to wireless communications. It consists of three parts: spectrum intelligence and adaptive resource management, transmission intelligence and adaptive baseband processing, and network intelligence and adaptive system optimization.
R.-S. He and Z.-G. Ding, Applications of Machine Learning in Wireless Communications, IET, 2019.
This book is a collection of chapters from various experts. The covered topics include channel estimation, signal identification, indoor localization, and resource allocation.
- Overviews and Tutorials
C. Jiang, H. Zhang, Y. Ren, Z. Han, K.-C. Chen, and L. Hanzo, “Machine Learning Paradigms for Next-Generation Wireless Networks,” IEEE Wireless Communications, vol. 24, no. 2, pp. 98-105, April 2017.
The article proposes to use machine learning (ML) paradigms to address challenges in the fifth generation (5G) wireless networks. The article first briefly reviews the rudimentary concepts of ML and introduces their compelling applications in 5G networks. With the help of ML, future smart 5G mobile terminals will autonomously access the most meritorious spectral bands with the aid of sophisticated spectral efficiency learning. The transmission protocols in 5G networks can be adaptively adjusted with the aid of quality of service learning/inference. The article assists the readers in refining the motivation, problem formulation, and methodology of powerful ML algorithms in the context of future wireless networks.
T. O'Shea and J. Hoydis, “An Introduction to Deep Learning for the Physical Layer,” IEEE Transactions on Cognitive Communications and Networking, vol. 3, no. 4, pp. 563-575, December 2017.
This paper presents the idea of learning full physical-layer implementations of communication systems with the help of neural network-based autoencoders. The technique is evaluated through simulations in several simple scenarios, such as the AWGN, Rayleigh fading, and two-user interference channels, where state-of-the-art performance is achieved. The paper discusses when and why deep learning can lead to gains with respect to classical model-based approaches, presents examples showing how expert knowledge can be injected into the neural network architecture, and outlines a list of future research challenges.
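The transmitter-channel-receiver decomposition described above can be sketched in a few lines of NumPy. This is our illustration, not code from the paper: a fixed QPSK mapping stands in for the trained encoder network, and a nearest-neighbor rule stands in for the trained decoder; in the paper both mappings are neural networks trained jointly through a differentiable channel model.

```python
import numpy as np

# Communication system as an autoencoder: transmitter f, noisy channel,
# receiver g.  Here f is a fixed QPSK mapping standing in for a trained
# encoder network; in the paper both f and g are learned jointly.
rng = np.random.default_rng(3)
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def transmit(msgs):                    # "encoder network"
    return constellation[msgs]

def receive(y):                        # "decoder network": nearest neighbor
    return np.argmin(np.abs(y[:, None] - constellation[None, :]) ** 2, axis=1)

msgs = rng.integers(0, 4, 10000)
noise = 0.1 * (rng.standard_normal(10000) + 1j * rng.standard_normal(10000))
ser = np.mean(receive(transmit(msgs) + noise) != msgs)
print(ser < 0.01)                      # essentially error-free at this SNR
```

Replacing the hand-designed mapping and decision rule with trainable networks, and optimizing them end-to-end on the symbol error rate, is precisely the step the paper takes.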
L. Liang, H. Ye, and G. Y. Li, “Toward Intelligent Vehicular Networks: A Machine Learning Framework,” IEEE Internet of Things Journal, vol. 6, no. 1, pp. 124-148, February 2019.
This article provides an extensive overview on how to use machine learning to address the pressing challenges of high-mobility vehicular networks. Through learning the underlying dynamics of a vehicular network, better decisions can be made to optimize network performance. In particular, the article discusses employing reinforcement learning to manage the network resources as a promising alternative to prevalent optimization approaches.
M. Ibnkahla, “Applications of Neural Networks to Digital Communications - A Survey,” Elsevier Signal Processing, vol. 80, pp. 1185-1215, July 2000.
This classical survey paper provides an excellent overview of research, mostly carried out in the 1990s, on various applications of neural networks to communication systems. It is a good link between past research and future trends in machine learning in communications.
K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath, “Deep Reinforcement Learning: A Brief Survey,” IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 26-38, November 2017.
This survey first introduces the principles of deep reinforcement learning (RL) and then presents the main streams of value-based and policy-based methods. It covers most important algorithms in deep RL, including the deep Q-network, trust region policy optimization, and asynchronous advantage actor critic. At the end of the article, several current research areas in the field of deep RL are introduced.
Z. Qin, H. Ye, G. Y. Li, and B.-H. Juang, “Deep Learning in Physical Layer Communications,” IEEE Wireless Communications, vol. 26, no. 2, pp. 93-99, April 2019.
This paper introduces a comprehensive framework for intelligent physical-layer communications by classifying systems into those with and without a block structure. It shows the power of deep learning to improve the performance of each individual communication block or to optimize the transceiver as a whole. In particular, the paper covers model-driven and data-driven signal compression, signal detection, and end-to-end communications. The article ends with some vital research challenges to be addressed in the future.
A. Balatsoukas-Stimming and C. Studer, “Deep Unfolding for Communications Systems: A Survey and Some New Directions,” in Proc. IEEE International Workshop on Signal Processing Systems (SiPS), October 2019.
Deep unfolding fuses iterative optimization algorithms with tools from neural networks to efficiently solve a range of tasks in machine learning, signal and image processing, and communication systems. This survey summarizes the principle of deep unfolding and discusses its recent use for communication systems with focus on detection and precoding in multi-antenna (MIMO) wireless systems and belief propagation decoding of error-correcting codes. To showcase the efficacy and generality of deep unfolding, the survey describes a range of other tasks relevant to communication systems that can be solved using this emerging paradigm. At the end of the survey, a list of open research problems and future research directions are provided.
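To make the unfolding idea concrete, the following minimal NumPy sketch (our illustration, not code from the survey) unrolls a fixed number of projected-gradient iterations for linear detection. The per-layer step sizes are set by hand here; in deep unfolding they become trainable parameters optimized from data.

```python
import numpy as np

def unfolded_gradient_detector(y, H, step_sizes):
    """Forward pass of an unfolded projected-gradient detector.

    Each entry of `step_sizes` plays the role of a per-layer trainable
    parameter; here the values are fixed by hand instead of learned.
    """
    x = np.zeros(H.shape[1])
    for mu in step_sizes:                 # one "layer" per unfolded iteration
        x = x + mu * H.T @ (y - H @ x)    # classic gradient step
        x = np.clip(x, -1.0, 1.0)         # project toward the BPSK box
    return np.sign(x)                     # hard decision

# Toy 4x4 real-valued, well-conditioned channel; BPSK symbols, no noise.
H = np.array([[1.0, 0.2, 0.0, 0.1],
              [0.1, 0.9, 0.2, 0.0],
              [0.0, 0.1, 1.1, 0.2],
              [0.2, 0.0, 0.1, 1.0]])
x_true = np.array([1.0, -1.0, 1.0, 1.0])
x_hat = unfolded_gradient_detector(H @ x_true, H, step_sizes=[0.3] * 40)
print(np.array_equal(x_hat, x_true))      # → True
```

Training the step sizes (and possibly per-layer matrices) on sample channel realizations is what turns this classical iteration into a "deep" detector with far fewer parameters than a generic neural network.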
M. Chen, U. Challita, W. Saad, C. Yin, and M. Debbah, “Artificial Neural Networks-Based Machine Learning for Wireless Networks: A Tutorial,” IEEE Communications Surveys & Tutorials, vol. 21, no. 4, pp. 3039-3071, Fourth Quarter, 2019.
This paper provides a comprehensive overview on how artificial neural networks (ANNs)-based machine learning algorithms can be employed for solving various wireless networking problems. It presents a detailed overview of a number of key types of ANNs that are pertinent to wireless networking applications. For each type of ANN, the paper presents the basic architecture as well as specific examples that are particularly important for wireless network design. It also provides an in-depth overview on the variety of wireless communication problems that can be addressed using ANNs, where the main motivations for using ANNs along with the associated challenges are discussed.
N. C. Luong, D. T. Hoang, S. Gong, D. Niyato, P. Wang, Y.-C. Liang, and D. I. Kim, “Applications of Deep Reinforcement Learning in Communications and Networking: A Survey,” IEEE Communications Surveys & Tutorials, vol. 21, no. 4, pp. 3133-3174, Fourth Quarter, 2019.
This paper presents a comprehensive literature review on applications of deep reinforcement learning (DRL) in communications and networking. It gives a tutorial of DRL from fundamental concepts to advanced models and reviews DRL approaches proposed to address emerging issues in communications and networking, which are all important to next generation networks. The paper also presents applications of DRL for traffic routing, resource sharing, and data collection, and highlights important challenges, open issues, and future research directions of applying DRL.
C. Zhang, P. Patras, and H. Haddadi, “Deep Learning in Mobile and Wireless Networking: A Survey,” IEEE Communications Surveys & Tutorials, vol. 21, no. 3, pp. 2224-2287, Third Quarter 2019.
This paper bridges the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. It introduces essential background and state-of-the-art in deep learning techniques with potential applications to networking, and discusses several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. It also provides an encyclopedic review of mobile and wireless networking research based on deep learning and discusses how to tailor deep learning to mobile environments. Finally, current challenges and open future directions for research are introduced.
H. He, S. Jin, C.-K. Wen, F. Gao, G. Y. Li, and Z. Xu, “Model-Driven Deep Learning for Physical Layer Communications,” IEEE Wireless Communications, vol. 26, no. 5, pp. 77-83, October 2019.
Although deep learning (DL) has been applied in physical layer communications and has demonstrated impressive performance improvements in recent years, most existing works focus on data-driven approaches, which treat each module in a communication system as a black box represented by a neural network and train it using a huge volume of data. In contrast, model-driven DL approaches combine communication domain knowledge with DL to reduce the demand for computing resources and training time. This article discusses recent advances in model-driven DL approaches, especially deep unfolding approaches, in physical layer communications, including transmission schemes, receiver design, and channel information recovery.
D. Gündüz, P. de Kerret, N. D. Sidiropoulos, D. Gesbert, C. R. Murthy, and M. van der Schaar, “Machine Learning in the Air,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 10, pp. 2184-2199, October 2019.
This paper reviews some of the major promises and challenges of machine learning (ML) in wireless communication systems, focusing mainly on the physical layer. It presents some of the most striking recent accomplishments that ML techniques have achieved with respect to classical approaches, and points to promising research directions where ML is likely to make the biggest impact in the near future. It also highlights the complementary problem of designing physical layer techniques to enable distributed ML at the wireless network edge, which further emphasizes the need to understand and connect ML with fundamental concepts in wireless communications.
S. Niknam, H. S. Dhillon, and J. H. Reed, “Federated Learning for Wireless Communications: Motivation, Opportunities, and Challenges,” IEEE Communications Magazine, vol. 58, no. 6, pp. 46-51, June 2020.
Due to its privacy-preserving nature, federated learning is particularly relevant to many wireless applications, especially in the context of fifth generation (5G) networks. This article provides an accessible introduction to the general idea of federated learning, discusses several possible applications in 5G networks, and describes key technical challenges and open problems for future research on federated learning in the context of wireless communications.
- Special Issues
“Machine Learning for Cognition in Radio Communications and Radar,” IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, pp. 3-247, February 2018.
“Robust Subspace Learning and Tracking: Theory, Algorithms, and Applications,” IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 6, December 2018.
“Machine Learning and Data Analytics for Optical Communications and Networking,” IEEE/OSA Journal of Optical Communications and Networking, vol. 10, no. 10, October 2018.
“Artificial Intelligence and Machine Learning for Networking and Communications,” IEEE Journal of Selected Areas in Communications, vol. 37, no. 6, June 2019.
“Machine Learning in Wireless Communication – Part I,” IEEE Journal of Selected Areas in Communications, vol. 37, no. 10, October 2019.
“Machine Learning in Wireless Communication – Part II,” IEEE Journal of Selected Areas in Communications, vol. 37, no. 11, November 2019.
“Leverage Machine Learning in SDN/NFV-based Networks,” IEEE Journal of Selected Areas in Communications, vol. 38, no. 2, February 2020.
“Advances in Artificial Intelligence and Machine Learning for Networking,” IEEE Journal of Selected Areas in Communications, vol. 38, no. 10, October 2020.
“Artificial Intelligence for Cognitive Wireless Communications,” IEEE Wireless Communications, vol. 26, no. 3, June 2019.
“Intelligent Radio: When Artificial Intelligence Meets the Radio Networks,” IEEE Wireless Communications, vol. 27, no. 1, February 2020.
“Artificial-Intelligence-Driven Fog Radio Access Networks: Recent Advances and Future Trends,” IEEE Wireless Communications, vol. 27, no. 2, April 2020.
“Edge Intelligence for Beyond 5G Networks,” IEEE Wireless Communications, vol. 28, no. 2, April 2021.
“Machine Learning in Communications and Networks,” IEEE Journal of Selected Areas in Communications Series, vol. 39, January/July/August 2021.
In this section, we introduce several topics in the area of machine learning (ML) in communications. These include ML-based signal detection; channel encoding and decoding; channel estimation, prediction, and compression; and resource allocation, all of which can directly improve the performance of individual processing blocks in traditional communication systems. The section also covers newly developed topics: end-to-end and semantic communications, distributed and federated learning, and related standardization, policy, and regulation. Papers that do not fall into any of the above topics are listed under selected topics at the end.
- I. Signal Detection
H. Ye, G. Y. Li, and B.-H. Juang, “Power of Deep Learning for Channel Estimation and Signal Detection in OFDM Systems,” IEEE Wireless Communications Letters, vol. 7, no. 1, pp. 114-117, February 2018.
This paper proposes a deep learning-based joint channel estimation and signal detection approach. A deep neural network is trained to recover the transmitted data from the received signals corresponding to the data and pilots. This method outperforms the minimum mean-squared error method for systems without adequate pilots or cyclic prefix and with nonlinear distortions.
N. Samuel, T. Diskin, and A. Wiesel, “Deep MIMO Detection,” in Proc. IEEE 18th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), July 2017 // Journal version: “Learning to Detect,” IEEE Transactions on Signal Processing, vol. 67, no. 10, pp. 2554-2564, May 2019.
The paper uses deep learning for massive multi-input multi-output (MIMO) detection by unfolding a projected gradient descent method, and applies the approach to both time-invariant and time-varying channels. The deep learning algorithm achieves the same accuracy as approximate message passing and semidefinite relaxation at lower complexity and with enhanced robustness.
N. Farsad and A. Goldsmith, “Neural Network Detection of Data Sequences in Communication Systems,” IEEE Transactions on Signal Processing, vol. 66, no. 21, pp. 5663-5678, November 2018.
This paper describes a bidirectional recurrent neural network for sequence detection in channels with memory. The method does not require knowledge of the channel model. Alternatively, if the channel model is known, it does not require knowledge of the channel state information (CSI). Simulation and experimental results show that the developed method works well and can outperform Viterbi detection in certain scenarios.
H. He, C. Wen, S. Jin, and G. Y. Li, “Model-Driven Deep Learning for MIMO Detection,” IEEE Transactions on Signal Processing, vol. 68, pp. 1702-1715, 2020.
This paper investigates model-driven deep learning (DL) for MIMO detection by unfolding an iterative algorithm (orthogonal approximate message passing) and adding some trainable parameters. Since the number of trainable parameters is much smaller than in a data-driven DL-based signal detector, the model-driven DL-based MIMO detector can be rapidly trained with a much smaller data set. The proposed MIMO detector can easily be extended to soft-input soft-output detection, outperforms other DL-based MIMO detectors, and exhibits superior robustness to various mismatches.
N. Shlezinger, N. Farsad, Y. C. Eldar, and A. J. Goldsmith, “ViterbiNet: A Deep Learning Based Viterbi Algorithm for Symbol Detection,” IEEE Transactions on Wireless Communications, vol. 19, no. 5, pp. 3319-3331, May 2020.
This paper combines the Viterbi algorithm with deep learning by identifying parts of the Viterbi algorithms that rely on assumptions on the channel model and replacing them with neural networks. The main structure of the algorithm is kept unchanged. To enable tracking of the channel aging, a meta learning algorithm is proposed to perform online training based on recent decisions of the detector. The proposed detector does not require knowledge of the channel state information (CSI), but is shown to achieve similar performance to the conventional Viterbi algorithm with perfect CSI, and to be able to track the channel changes.
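The key structural point, that the dynamic program is unchanged and only the likelihood table is learned, can be sketched as follows. This is our illustration with a hypothetical two-state channel; in ViterbiNet the `log_lik` table is produced by a trained neural network rather than an explicit channel model.

```python
import numpy as np

def viterbi(log_lik, log_trans):
    """Plain Viterbi dynamic program.  In ViterbiNet the table `log_lik`
    comes from a neural network instead of a channel model; the dynamic
    program itself stays exactly as below."""
    T, S = log_lik.shape
    back = np.zeros((T, S), dtype=int)
    score = log_lik[0].copy()
    for t in range(1, T):
        cand = score[:, None] + log_trans      # scores of (prev, next) pairs
        back[t] = np.argmax(cand, axis=0)      # best predecessor per state
        score = cand[back[t], np.arange(S)] + log_lik[t]
    path = [int(np.argmax(score))]             # backtrack the best path
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# A "sticky" two-state channel smooths out one noisy observation.
log_lik = np.log(np.array([[0.9, 0.1], [0.4, 0.6], [0.9, 0.1], [0.9, 0.1]]))
log_trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
print(viterbi(log_lik, log_trans))             # → [0, 0, 0, 0]
```

Because the learned part is confined to the per-state likelihoods, the online meta-learning step in the paper only has to retrain a small network to track channel aging.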
M. Khani, M. Alizadeh, J. Hoydis, and P. Fleming, “Adaptive Neural Signal Detection for Massive MIMO,” IEEE Transactions on Wireless Communications, vol. 19, no. 8, pp. 5635-5648, August 2020.
This paper introduces an iterative algorithm for symbol detection in MU-MIMO systems that relies on soft thresholding. Each iteration includes trainable parameters, which are optimized for each channel realization using a training algorithm that leverages temporal and spectral correlation to accelerate the training process. By updating the trainable parameters for each channel realization, the receiver tracks the channel.
- II. Channel Encoding and Decoding
N. Farsad, M. Rao, and A. Goldsmith, “Deep Learning for Joint Source-Channel Coding of Text,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, Canada, April 2018.
This paper addresses joint source and channel coding of structured data, such as natural language, over a noisy channel. The typical approach to this problem is optimal in terms of minimizing end-to-end distortion only when both the source and channel have arbitrarily long block lengths, which is not necessarily optimal for finite-length documents or encoders. This paper demonstrates that, in this scenario, a deep learning based encoder and decoder can achieve lower word-error rates.
T. Gruber, S. Cammerer, J. Hoydis, and S. ten Brink, “On Deep Learning-based Channel Decoding,” in Proc. Information Sciences and Systems (CISS), Baltimore, USA, March 2017.
The authors of this paper use neural networks to learn decoders for random and structured codes, such as polar codes. The key observations are that (i) optimal bit-error rate performance for both code families and short codeword lengths can be achieved, (ii) structured codes are easier to learn, and (iii) the neural network is able to generalize to codewords that it has never seen during training for the structured codes, but not for the random codes. Scaling to long codewords is identified as the main challenge for neural network-based decoding due to the curse of dimensionality.
E. Nachmani, E. Marciano, L. Lugosch, W. J. Gross, D. Burshtein, and Y. Be’ery, “Deep Learning Methods for Improved Decoding of Linear Codes,” IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, pp. 119-131, February 2018.
The paper applies deep learning to the decoding of linear block codes with short to moderate block length based on a recurrent neural network architecture. The methods show advantages in complexity and performance in the belief propagation and min-sum algorithms.
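As a minimal sketch of the weighting idea behind such neural decoders (our illustration, with a single scalar weight rather than the per-edge, per-iteration weights learned in the paper), one weighted min-sum check-node update looks like:

```python
import numpy as np

def weighted_min_sum_check(llrs, weight):
    """One check-node update of weighted min-sum decoding.

    `weight` stands in for the trainable parameters that neural
    decoders learn per edge or per iteration.
    """
    llrs = np.asarray(llrs, dtype=float)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(llrs, i)          # extrinsic rule: exclude edge i
        out[i] = weight * np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

print(weighted_min_sum_check([2.0, -1.0, 4.0], weight=0.8))
```

Unrolling a fixed number of such updates and training the weights by gradient descent recovers the recurrent decoder architecture studied in the paper, at no extra cost per decoding iteration.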
Y. Jiang, H. Kim, H. Asnani, S. Kannan, S. Oh, and P. Viswanath, “LEARN Codes: Inventing Low-Latency Codes via Recurrent Neural Networks,” IEEE Journal on Selected Areas in Information Theory, vol. 1, no. 1, pp. 207-216, May 2020.
This paper uses deep learning techniques to construct error-correcting codes by jointly designing a neural network encoder and decoder structure. The authors show that the constructed codes can outperform existing convolutional codes, although further research is required to outperform more advanced codes, such as low-density parity-check (LDPC) codes. The authors also construct codes with decoding latency constraints and show that these codes are robust to channel mismatches.
I. Be’ery, N. Raviv, T. Raviv, and Y. Be’ery, “Active Deep Decoding of Linear Codes,” IEEE Transactions on Communications, vol. 68, no. 2, pp. 728-736, February 2020.
This paper considers the training procedure for weighted belief propagation (BP) decoding. In particular, the authors propose several metrics that can be used to construct better training sets for neural decoders. This carefully constructed training set is shown to improve the error-correcting performance of learned weighted BP decoders at no additional decoding complexity cost.
A. Buchberger, C. Häger, H. D. Pfister, L. Schmalen, and A. Graell i Amat, “Pruning and Quantizing Neural Belief Propagation Decoders,” IEEE Journal on Selected Areas in Communications, vol. 39, no. 7, pp. 1957-1966, July 2021.
This paper uses deep learning techniques to prune overcomplete parity-check matrices for near-maximum-likelihood decoding of short linear block codes. The authors also consider the joint optimization of quantization parameters and offsets in offset min-sum decoding with pruned overcomplete graphs. The results show improved performance and reduced computational complexity compared to standard neural offset min-sum decoding.
- III. Channel Estimation, Prediction, and Compression
H. He, C.-K. Wen, S. Jin, and G. Y. Li, “Deep Learning-Based Channel Estimation for Beamspace mmWave Massive MIMO Systems,” IEEE Wireless Communications Letters, vol. 7, no. 5, pp. 852-855, October 2018.
This paper develops a deep learning-based channel estimation network for beamspace millimeter-wave massive multi-input multi-output (MIMO) systems. A neural network is used to learn the channel structure and estimate the channel from a large amount of training data. The paper provides an analytical framework on the asymptotic performance of the channel estimator. Results demonstrate that the neural network significantly outperforms state-of-the-art compressed sensing-based algorithms even when the receiver is equipped with a small number of RF chains.
C.-K. Wen, W.-T. Shih, and S. Jin, “Deep Learning for Massive MIMO CSI Feedback,” IEEE Wireless Communications Letters, vol. 7, no. 5, pp. 748-751, October 2018.
This article develops a novel channel state information (CSI) sensing and recovery mechanism using deep learning. The new approach learns to exploit channel structure effectively from training samples and transforms CSI into a near-optimal number of representations/codewords. The proposed approach can recover CSI with significantly improved reconstruction performance compared to existing compressive sensing (CS)-based methods, even at excessively low compression ratios where the traditional CS-based methods fail.
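A linear stand-in for the learned encoder-decoder pair (our sketch; the paper uses convolutional autoencoders trained on channel data) shows the compress-feedback-reconstruct pipeline and why channel structure makes aggressive compression possible:

```python
import numpy as np

# Linear stand-in for a learned CSI autoencoder: compress N-dimensional
# CSI vectors into M-entry codewords at the user and reconstruct them at
# the base station, exploiting low-dimensional channel structure.
rng = np.random.default_rng(1)
N, M = 32, 4
basis = rng.standard_normal((N, M))
H = rng.standard_normal((1000, M)) @ basis.T   # synthetic CSI in an M-dim subspace

_, _, Vt = np.linalg.svd(H, full_matrices=False)
encoder = Vt[:M].T                 # N x M "encoder": user side, compress
decoder = Vt[:M]                   # M x N "decoder": base-station side

codewords = H @ encoder            # feedback payload: M numbers per CSI vector
H_hat = codewords @ decoder        # reconstruction
print(np.allclose(H, H_hat))       # structure makes 8x compression lossless here
```

Real channels are only approximately low-dimensional and the mapping is nonlinear, which is why the paper's trained nonlinear autoencoder outperforms linear and CS-based baselines.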
Y. Wang, M. Narasimha, and R. W. Heath, Jr., “MmWave Beam Prediction with Situational Awareness: A Machine Learning Approach,” in Proc. IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Kalamata, Greece, June 2018.
This article combines machine learning tools and situational awareness to learn the beam information, e.g., received power and the optimal beam index, in millimeter-wave communication systems. It uses the vehicle locations as features to predict the received power of any beam in the beam codebook and shows that situational awareness can largely improve the prediction accuracy. The method requires almost no overhead and can achieve high throughput with only a small performance degradation.
D. Neumann, T. Wiese, and W. Utschick, “Learning the MMSE Channel Estimator,” IEEE Transactions on Signal Processing, vol. 66, no. 11, pp. 2905-2917, June 2018.
This paper addresses the problem of estimating Gaussian random vectors with random covariance matrices. The authors develop a neural network architecture inspired from expert knowledge about the covariance matrix structure, which achieves state-of-the-art performance with an order-of-magnitude less complexity. This is a good example of how expert knowledge can be combined with machine learning methods to outperform purely model-based approaches.
M. Soltani, V. Pourahmadi, A. Mirzaei, and H. Sheikhzadeh, “Deep Learning-Based Channel Estimation,” IEEE Communications Letters, vol. 23, no. 4, pp. 652-655, April 2019.
This paper addresses the problem of channel estimation using some known values at the pilot locations. The authors develop two neural networks inspired by deep image processing techniques, e.g., image super-resolution (SR) and image restoration (IR). The proposed technique maps a limited number of pilots as a low-resolution image to predict the high-resolution channel estimation information.
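The classical baseline that such a super-resolution network refines can be sketched in a few lines (our illustration, assuming a hypothetical 64-subcarrier grid and a smooth toy channel): treat the pilot values as a low-resolution signal and interpolate up to the full grid.

```python
import numpy as np

# Interpolate a channel known only at pilot subcarriers -- the
# low-resolution-to-high-resolution mapping that a learned
# super-resolution network replaces and refines.
n_sc = 64
pilots = np.arange(0, n_sc, 7)                              # pilots at 0, 7, ..., 63
true_channel = np.cos(2 * np.pi * np.arange(n_sc) / n_sc)   # smooth toy channel
h_est = np.interp(np.arange(n_sc), pilots, true_channel[pilots])
print(float(np.max(np.abs(h_est - true_channel))) < 0.1)    # small interpolation error
```

The proposed networks learn a data-driven version of this mapping, so they can exploit channel statistics that simple linear interpolation ignores.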
E. Balevi and J. G. Andrews, “One-Bit OFDM Receivers via Deep Learning,” IEEE Transactions on Communications, vol. 67, no. 6, pp. 4326 - 4336, June 2019.
The authors propose a novel deep learning-based strategy for an orthogonal frequency division multiplexing (OFDM) receiver under the constraint of one-bit complex quantization. First, a generative supervised deep neural network is employed for channel estimation. Then, an autoencoder jointly learns the precoder and the decoder for data symbol detection. It is demonstrated that this deep learning-based method outperforms traditional unquantized OFDM reception.
P. Dong, H. Zhang, G. Y. Li, I. S. Gaspar, and N. NaderiAlizadeh, “Deep CNN-Based Channel Estimation for mmWave Massive MIMO Systems,” IEEE Journal of Selected Topics in Signal Processing, vol. 13, no. 5, pp. 989-1000, September 2019.
For millimeter-wave (mmWave) massive MIMO systems, hybrid processing is normally used to reduce complexity and cost, which, however, makes channel estimation very challenging. In this paper, a deep convolutional neural network (CNN) is employed to address this issue. A spatial-frequency CNN (SF-CNN) based channel estimation is first proposed to exploit both spatial and frequency correlations, where the corrupted channel matrices at adjacent subcarriers are input into the CNN simultaneously. Then, a spatial-frequency-temporal CNN (SFT-CNN) is developed to further improve the estimation accuracy for time-varying channels.
Y. Jin, J. Zhang, S. Jin, and B. Ai, “Channel Estimation for Cell-Free mmWave Massive MIMO Through Deep Learning,” IEEE Transactions on Vehicular Technology, vol. 68, no. 10, pp. 10325-10329, October 2019.
This paper develops a deep learning-based channel estimation network for cell-free millimeter-wave massive multi-input multi-output (MIMO) systems. A de-noising neural network learns different noise maps and improves the accuracy of channel estimation with only a single model. It is demonstrated that the proposed network significantly outperforms state-of-the-art channel estimators.
C. Luo, J. Ji, Q. Wang, X. Chen, and P. Li, “Channel State Information Prediction for 5G Wireless Communications: A Deep Learning Approach,” IEEE Transactions on Network Science and Engineering, vol. 7, no. 1, pp. 227-236, January-March 2020.
This article develops a novel channel state information (CSI) prediction framework using deep learning. The authors identify several important features affecting the CSI. The new approach learns to predict CSI effectively from historical data in 5G wireless communication systems and generates more stable predictions via a two-stage learning algorithm. The proposed approach not only obtains the predicted CSI values very quickly but also achieves highly accurate CSI prediction.
Q. Hu, F. Gao, H. Zhang, S. Jin, and G. Y. Li, “Deep Learning for Channel Estimation: Interpretation, Performance, and Comparison,” IEEE Transactions on Wireless Communications, vol. 20, no. 4, pp. 2398-2412, April 2021.
Despite their unprecedented success in communications, deep learning (DL) methods are often regarded as black boxes that lack explanations of their internal mechanisms, which severely limits their further improvement and extension. This paper presents a preliminary theoretical analysis of DL-based channel estimation for single-input multiple-output (SIMO) systems to understand and interpret its internal mechanisms. It is demonstrated that DL-based channel estimation is not restricted to any specific signal model and asymptotically approaches the minimum mean-squared error (MMSE) estimation in various scenarios without requiring any prior knowledge of channel statistics. This explains why DL-based channel estimation outperforms, or is at least comparable with, traditional channel estimation, depending on the type of channel.
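For intuition, the MMSE estimator that DL-based estimation is shown to approach has a closed form in a toy scalar Gaussian model (our own illustration, not the paper's SIMO setting):

```python
import random

# Toy scalar Gaussian model (an illustrative assumption, not the paper's
# setup): y = h + n with h ~ N(0, var_h) and n ~ N(0, var_n).  The MMSE
# estimate is the posterior mean  h_mmse = y * var_h / (var_h + var_n),
# which a well-trained DL estimator is shown to approach.

def mmse_estimate(y, var_h, var_n):
    return y * var_h / (var_h + var_n)

random.seed(0)
var_h, var_n = 1.0, 0.5
mse_mmse = mse_ls = 0.0
trials = 20000
for _ in range(trials):
    h = random.gauss(0, var_h ** 0.5)
    y = h + random.gauss(0, var_n ** 0.5)
    mse_mmse += (mmse_estimate(y, var_h, var_n) - h) ** 2
    mse_ls += (y - h) ** 2  # least-squares estimate is simply y
print(mse_mmse / trials, mse_ls / trials)  # MMSE beats LS: ~0.33 vs ~0.5
```

The gap between the two error levels is exactly what knowledge of the channel statistics buys, and what the DL estimator learns implicitly from data.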
- IV. End-to-End Communications and Semantic Communications
S. Dörner, S. Cammerer, J. Hoydis, and S. ten Brink, “Deep Learning-Based Communication over the Air,” IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, pp. 132-143, February 2018.
This paper reports the world’s first implementation of a fully neural network-based communication system using software-defined radios. The authors identify the “missing channel gradient” as the biggest obstacle in training such systems over actual channels and propose a workaround: model-based training in simulations followed by receiver fine-tuning on measured data. Their implementation comes close to, but does not outperform, a well-designed baseline. A special neural network structure is introduced for synchronization to single-carrier waveforms.
H. Ye, G. Y. Li, B.-H. Juang, and K. Sivanesan, “Channel Agnostic End-to-End Learning based Communication Systems with Conditional GAN,” in Proc. IEEE Global Communications Conference (GLOBECOM), December 2018. // Journal version: “Deep Learning-Based End-to-End Wireless Communication Systems With Conditional GANs as Unknown Channels,” IEEE Transactions on Wireless Communications, vol. 19, no. 5, pp. 3133-3143, May 2020.
This paper employs a conditional generative adversarial net (GAN) to build an end-to-end communication system without a channel model, where deep neural networks (DNNs) represent both the transmitter and the receiver. The conditional GAN learns to generate the channel effects and acts as a bridge for the gradients to pass through in order to jointly train and optimize both the transmitter and the receiver DNNs.
F. Ait Aoudia and J. Hoydis, “End-to-End Learning of Communications Systems Without a Channel Model,” in Proc. IEEE Asilomar Conference on Signals, Systems, and Computers, October 2018. // Journal version: “Model-Free Training of End-to-End Communication Systems,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 11, pp. 2503-2516, November 2019.
The authors provide a solution to the problem of training autoencoder-based communication systems over actual channels without any channel model. The key idea is to estimate the channel gradient using policy gradients, a technique from reinforcement learning. Simulations show that this approach works as well as model-based learning for AWGN and Rayleigh fading channels.
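The policy-gradient trick can be sketched in a few lines (a toy scalar example of our own, using an antithetic two-sided estimate for lower variance; the paper trains full autoencoders):

```python
import random

# Minimal sketch of the key idea (our own toy, not the authors' system): a
# transmitter parameter theta is trained WITHOUT a channel model.  The
# transmitter perturbs its output, observes only the scalar loss fed back
# from the receiver, and forms a policy-gradient estimate.

random.seed(1)

def black_box_channel(x):
    # Unknown to the transmitter: 0.5x attenuation plus receiver noise.
    return 0.5 * x + random.gauss(0, 0.01)

theta, sigma, lr, target = 0.1, 0.1, 0.05, 1.0
for _ in range(2000):
    eps = random.gauss(0, sigma)
    loss_p = (black_box_channel(theta + eps) - target) ** 2
    loss_m = (black_box_channel(theta - eps) - target) ** 2
    # Two-sided policy-gradient estimate; no channel gradient is needed.
    grad = (loss_p - loss_m) * eps / (2 * sigma ** 2)
    theta -= lr * grad
print(round(theta, 2))  # close to 2.0, inverting the unknown 0.5 gain
```

Only the scalar loss crosses the channel boundary, which is exactly what makes model-free training of the transmitter possible.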
B. Karanov, M. Chagnon, F. Thouin, T. A. Eriksson, H. Bülow, D. Lavery, P. Bayvel, and L. Schmalen, “End-to-End Deep Learning of Optical Fiber Communications,” Journal of Lightwave Technology, vol. 36, no. 20, pp. 4843-4855, October 2018.
This paper uses an autoencoder to learn transmitter and receiver neural networks for use in optical communications. The experimental results show that the autoencoder can effectively learn to deal with nonlinearities and that it provides good performance for different link dispersions.
E. Bourtsoulatze, D. Burth Kurka, and D. Gündüz, “Deep Joint Source-Channel Coding for Wireless Image Transmission,” IEEE Transactions on Cognitive Communications and Networking, vol. 5, no. 3, pp. 567-579, September 2019.
This article designs a joint source and channel coding (JSCC) system for wireless image transmission without traditional codecs, in which the input images are mapped directly into the channel inputs. The encoder and decoder are parameterized by two convolutional neural networks (CNNs), which are trained jointly to minimize the average mean-squared error (MSE). The experimental results show that JSCC outperforms digital transmission based on JPEG or JPEG2000 compression at low SNRs and channel bandwidths in the presence of AWGN.
S. Cammerer, F. Ait Aoudia, S. Dörner, M. Stark, J. Hoydis, and S. ten Brink, “Trainable Communication Systems: Concepts and Prototype,” IEEE Transactions on Communications, vol. 68, no. 9, pp. 5489-5503, September 2020.
This paper considers autoencoder-based point-to-point communication systems and demonstrates that training on the bit-wise mutual information enables seamless integration with practical bit-metric decoding receivers. A neural iterative demapping and decoding receiver architecture is also proposed and jointly optimized with the constellation geometry and bit labeling using end-to-end learning. LDPC codes are also designed on top of the learned end-to-end system to achieve further gains. Finally, the viability of the proposed approach is demonstrated by over-the-air training of the end-to-end system.
H. Xie, Z. Qin, G. Y. Li, and B.-H. Juang, “Deep Learning Enabled Semantic Communication Systems,” IEEE Transactions on Signal Processing, vol. 69, pp. 2663-2675, May 2021.
This paper provides a new view of communication systems at the semantic level. The authors identify the concept and the main obstacles in designing such systems and employ the Transformer to build an end-to-end semantic communication system, named DeepSC, in which semantic information is delivered instead of recovering bit sequences. The experimental results show that the proposed DeepSC is more robust to channel variations and achieves better performance than typical communication systems, especially in the low signal-to-noise ratio (SNR) regime.
- V. Resource Allocation
H. Sun, X. Chen, Q. Shi, M. Hong, X. Fu, and N. D. Sidiropoulos, “Learning to Optimize: Training Deep Neural Networks for Interference Management,” IEEE Transactions on Signal Processing, vol. 66, no. 20, pp. 5438-5453, October 2018.
This paper exploits deep neural networks (DNNs) to address optimization and interference management problems. The input-output mapping of a signal-processing (SP) algorithm is treated as an unknown nonlinear function and approximated by a DNN. The paper identifies a class of optimization algorithms that can be learned accurately by a DNN of moderate size and uses an interference management algorithm as an example to demonstrate the effectiveness of the proposed approach.
V. Va, J. Choi, T. Shimizu, G. Bansal, and R. W. Heath, Jr., “Inverse Multipath Fingerprinting for Millimeter Wave V2I Beam Alignment,” IEEE Transactions on Vehicular Technology, vol. 67, no. 5, pp. 4042-4058, May 2018.
This paper uses multipath fingerprinting to address the beam alignment problem in millimeter wave vehicle-to-infrastructure communications. Based on the vehicle's position (e.g., available via GPS), the multipath fingerprint/signature is first obtained from a database and provides prior knowledge of potential pointing directions for reliable beam alignment, which can be regarded as the inverse of fingerprinting localization. Extensive simulation results show that the proposed approach provides increasing rates with larger antenna arrays, whereas IEEE 802.11ad suffers decreasing rates due to its higher beam training overhead.
U. Challita, L. Dong, and W. Saad, “Proactive Resource Management for LTE in Unlicensed Spectrum: A Deep Learning Perspective,” IEEE Transactions on Wireless Communications, vol. 17, no. 7, pp. 4674-4689, July 2018.
This paper develops a deep learning-based resource allocation framework for the coexistence of Long Term Evolution (LTE) networks with licensed assisted access (LTE-LAA) and WiFi in the unlicensed spectrum. Using long short-term memory (LSTM) networks, each small-cell base station can decide on its spectrum allocation autonomously while requiring only limited information on the network state.
R. Daniels and R. W. Heath, Jr., “An Online Learning Framework for Link Adaptation in Wireless Networks,” in Proc. Information Theory and Applications Workshop (ITA), San Diego, USA, February 2009.
This paper is one of the earliest works on machine learning (ML) for link adaptation. The motivation for using ML lies in the difficulty of modeling the impairments in wireless communications (nonlinearities, interference). It uses real-time measurements to build and continuously adapt a k-nearest-neighbor classifier. Follow-up work relies on support vector machines (SVMs).
S. Wang, H. Liu, P.H. Gomes, and B. Krishnamachari, “Deep Reinforcement Learning for Dynamic Multichannel Access in Wireless Networks,” IEEE Transactions on Cognitive Communications and Networking, vol. 4, no. 2, pp. 257-265, February 2018.
This paper considers a dynamic multichannel access problem, where multiple correlated channels follow an unknown joint Markov model and users select which channel to use for transmission. The work applies reinforcement learning and implements a deep Q-network (DQN) to maximize the number of successful transmissions. An adaptive DQN approach is finally proposed with the capability to adjust its learning in time-varying scenarios.
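The underlying idea can be sketched with a tabular Q-learner, the small-scale ancestor of the paper's DQN (a toy alternating-channel model of our own):

```python
import random

# Toy sketch (our own simplification, not the paper's DQN): two channels
# alternate availability under a rule unknown to the user.  A tabular
# Q-learner uses the last (channel, outcome) pair as its state and learns
# to switch to the channel that will be free next.

random.seed(2)
Q = {}  # Q[(state, action)] -> value; state = (last_channel, last_success)

def q(s, a):
    return Q.get((s, a), 0.0)

alpha, gamma, explore = 0.2, 0.9, 0.1
state, t = (0, 1), 0
for _ in range(5000):
    if random.random() < explore:
        action = random.randrange(2)
    else:
        action = 0 if q(state, 0) >= q(state, 1) else 1
    reward = 1 if action == t % 2 else 0   # channel t % 2 is free this slot
    nxt = (action, reward)
    best_next = max(q(nxt, 0), q(nxt, 1))
    Q[(state, action)] = q(state, action) + alpha * (
        reward + gamma * best_next - q(state, action))
    state, t = nxt, t + 1

# Learned policy: after a success on a channel, switch to the other one.
print(q((0, 1), 1) > q((0, 1), 0), q((1, 1), 0) > q((1, 1), 1))
```

The DQN in the paper replaces the table `Q` with a neural network so the same update scales to many correlated channels and long observation histories.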
O. Naparstek and K. Cohen, “Deep Multi-User Reinforcement Learning for Distributed Dynamic Spectrum Access,” IEEE Transactions on Wireless Communications, vol. 18, no. 1, pp. 310-323, November 2018.
This paper develops a novel distributed dynamic spectrum access algorithm based on deep multi-user reinforcement leaning for network utility maximization in multichannel wireless networks. Game theoretic analysis of the system dynamics is developed for establishing design principles for the implementation of the algorithm. The experimental results demonstrate the strong performance of the algorithm.
H. Ye, G. Y. Li, and B.-H. F. Juang, “Deep Reinforcement Learning Based Resource Allocation for V2V Communications,” IEEE Transactions on Vehicular Technology, vol. 68, no. 4, April 2019.
This paper develops a novel decentralized resource allocation mechanism for vehicle-to-vehicle (V2V) communications based on deep reinforcement learning. In the proposed method, an autonomous “agent,” a V2V link or a vehicle, makes its decisions to find the optimal sub-band and power level for transmission in a distributed manner, thus incurring very limited transmission overhead. From the simulation results, each agent can effectively learn to satisfy the stringent latency constraints on V2V links while minimizing the interference to vehicle-to-infrastructure communications.
M. Eisen, C. Zhang, L. F. O. Chamon, D. D. Lee, and A. Ribeiro, “Learning Optimal Resource Allocations in Wireless Systems,” IEEE Transactions on Signal Processing, vol. 67, no. 10, pp. 2775-2790, May 2019.
This paper develops learning-based design for optimal resource allocation in wireless communication systems, where the training is undertaken in the dual domain to handle stochastic constraints. It is shown that this can be done with small loss of optimality when using near-universal learning parameterizations. In particular, since deep neural networks (DNNs) are near universal, their use is advocated and explored. DNNs are trained here with a model-free primal-dual method that simultaneously learns a DNN parameterization of the resource allocation policy and optimizes the primal and dual variables.
W. Cui, K. Shen, and W. Yu, “Spatial Deep Learning for Wireless Scheduling,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 6, pp. 1248-1261, June 2019.
This paper schedules wireless links via unsupervised training over randomly deployed networks, using a novel neural network architecture that computes geographic spatial convolutions over the interfering and interfered neighboring nodes, followed by multiple feedback stages, to learn the optimal solution. To provide fairness, it also develops a novel scheduling approach that applies the sum-rate-optimal scheduling algorithm to judiciously chosen subsets of links so as to maximize a proportional fairness objective over the network.
L. Liang, H. Ye, and G. Y. Li, “Spectrum Sharing in Vehicular Networks Based on Multi-Agent Reinforcement Learning,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 10, pp. 2282-2292, October 2019.
This paper addresses spectrum sharing in vehicular networks using multi-agent reinforcement learning, where multiple vehicle-to-vehicle (V2V) links reuse the frequency spectrum preoccupied by vehicle-to-infrastructure (V2I) links. It is solved with a fingerprint-based deep Q-network method that is amenable to a distributed implementation. Simulation results demonstrate that with a proper reward design and training mechanism, the multiple V2V agents successfully learn to cooperate in a distributed way to simultaneously improve the sum capacity of V2I links and payload delivery rate of V2V links.
Y. S. Nasir and D. Guo, “Multi-Agent Deep Reinforcement Learning for Dynamic Power Allocation in Wireless Networks,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 10, pp. 2239-2250, October 2019.
This paper develops a distributively executed dynamic power allocation scheme based on model-free deep reinforcement learning. The objective is to maximize a weighted sum-rate utility function, which can be particularized to achieve maximum sum-rate or proportionally fair scheduling. For a typical network architecture, the proposed algorithm is shown to achieve near-optimal power allocation in real time based on delayed channel state information (CSI) measurements available to the agents. The proposed scheme is especially suitable for practical scenarios where the system model is inaccurate and CSI delay is non-negligible.
F. Liang, C. Shen, W. Yu, and F. Wu, “Towards Optimal Power Control via Ensembling Deep Neural Networks,” IEEE Transactions on Communications, vol. 68, no. 3, pp. 1760-1776, March 2020.
The paper develops three deep neural network (DNN) based power control methods, namely PCNet, PCNet(+), and ePCNet(+), that solve the non-convex problem of maximizing the sum rate of a fading multi-user interference channel. To address the lack of ground-truth labels, the proposed methods adopt an unsupervised learning strategy and directly maximize the sum rate in the training phase. Simulation results show that the proposed methods outperform state-of-the-art power control solutions under a variety of system configurations. Furthermore, the performance improvement of ePCNet comes with reduced computational complexity.
- VI. Distributed and Federated Learning and Communications
N. H. Tran, W. Bao, A. Zomaya, M. N. H. Nguyen, and C. S. Hong, “Federated Learning over Wireless Networks: Optimization Model Design and Analysis,” in Proc. IEEE Conference on Computer Communications (INFOCOM), May 2019.
This paper investigates federated learning (FEDL) over wireless networks and addresses two under-explored trade-offs: learning time versus users’ energy consumption, and computation versus communication latency. FEDL is formulated as an optimization problem in which each mobile device computes local learning tasks and transmits its local update in a time-sharing fashion. The problem can then be solved optimally by exploiting its structure.
S.-Q. Wang, T. Tuor, T. Salonidis, K. K. Leung, C. Makaya, T. He, and K. Chan, “Adaptive Federated Learning in Resource Constrained Edge Computing Systems,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 6, pp. 1205-1221, June 2019.
This paper focuses on gradient descent based federated learning, which involves local and global updates. By analyzing the distributed gradient descent algorithm, a convergence bound is obtained, based on which a control algorithm is proposed to dynamically adapt the aggregation frequency so that the loss function is minimized under given resource constraints.
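The local-update/global-aggregation pattern whose frequency the control algorithm adapts can be sketched as follows (illustrative quadratic losses of our own, not the paper's model):

```python
# Sketch of the local-update / global-aggregation pattern analyzed in the
# paper (toy quadratic losses of our own).  Each device runs `tau` local
# gradient steps; the server then averages the local models.  The paper's
# control algorithm adapts `tau` to the available resource budget.

def federated_round(w_global, targets, tau, lr=0.1):
    locals_ = []
    for c in targets:                  # device k has loss 0.5 * (w - c)^2
        w = w_global
        for _ in range(tau):           # tau local gradient steps
            w -= lr * (w - c)
        locals_.append(w)
    return sum(locals_) / len(locals_)  # global aggregation

targets = [1.0, 3.0, 5.0]              # data on each device pulls elsewhere
w = 0.0
for _ in range(50):                    # 50 communication rounds, tau = 5
    w = federated_round(w, targets, tau=5)
print(round(w, 3))  # converges to the global optimum 3.0 (mean of targets)
```

A larger `tau` saves communication rounds but lets local models drift toward their own optima, which is precisely the trade-off the convergence bound captures.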
H. H. Yang, Z.-Z. Liu, T. Q. S. Quek, and H. V. Poor, “Scheduling Policies for Federated Learning in Wireless Networks,” IEEE Transactions on Communications, vol. 68, no. 1, pp. 317-333, January 2020.
This paper proposes a training algorithm for federated learning (FL) and analyzes the convergence of the training process in a wireless setting. Various system parameters are considered in the analytical model, e.g., the transmission scheduling policy, small-scale fading, large-scale path loss, and inter-cell interference. In particular, the convergence rates of FL under three scheduling policies, random scheduling, round robin, and proportional fair, are analyzed. The paper shows that the proportional fair policy performs best under a high SINR decoding threshold, while round robin is preferable when the SINR decoding threshold is low.
G.-X. Zhu, Y. Wang, and K.-B. Huang, “Broadband Analog Aggregation for Low-Latency Federated Edge Learning,” IEEE Transactions on Wireless Communications, vol. 19, no. 1, pp. 491-506, January 2020.
This paper quantifies the effects of broadband analog aggregation (BAA) on the performance of federated learning in a single-cell random network. With BAA, all devices transmit their updates simultaneously over broadband channels, and the updates are aggregated “over the air” thanks to the waveform-superposition property of a multi-access channel. A BAA framework for low-latency federated edge learning is presented, and two important trade-offs are addressed: between update reliability and the expected update-truncation signal-to-noise ratio (SNR), and between the receive SNR and the fraction of data exploited in the learning process.
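The waveform-superposition idea behind BAA can be illustrated in a few lines (toy numbers of our own):

```python
import random

# Toy illustration (our own numbers, not the paper's system) of the
# waveform-superposition idea: each device inverts its own channel gain
# before transmitting, so the multi-access channel itself outputs the SUM
# of the updates in one shot, with no per-device decoding.

random.seed(3)
updates = [0.2, -0.5, 0.9, 0.1]                      # local model updates
gains = [random.uniform(0.5, 2.0) for _ in updates]  # per-device channel gains

# Each device pre-scales its update by 1/gain; the signals add "over the air".
received = sum(g * (u / g) for g, u in zip(gains, updates))
received += random.gauss(0, 1e-3)                    # small receiver noise

aggregate = received / len(updates)                  # averaged global update
print(round(aggregate, 3))  # close to 0.175, the true mean of the updates
```

Because aggregation happens in the channel itself, the latency does not grow with the number of devices, which is the source of BAA's low-latency advantage.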
S. Samarakoon, M. Bennis, W. Saad, and M. Debbah, “Distributed Federated Learning for Ultra-Reliable Low-Latency Vehicular Communications,” IEEE Transactions on Communications, vol. 68, no. 2, pp. 1146 – 1159, February 2020.
This paper considers the problem of joint power and resource allocation (JPRA) for ultra-reliable low-latency communication (URLLC) in vehicular networks. Federated learning is employed to help the vehicular users estimate the tail distribution of the network-wide queues locally without sharing the actual data samples. Furthermore, a Lyapunov-based distributed JPRA procedure is proposed for vehicular users.
K. Yang, T. Jiang, Y.-M. Shi, and Z. Ding, “Federated Learning via Over-the-Air Computation,” IEEE Transactions on Wireless Communications, vol. 19, no. 3, pp. 2022-2035, March 2020.
This paper proposes an over-the-air computation-based approach for fast global model aggregation to train the federated learning model by joint device selection and beamforming design. This joint design problem is formulated as a sparse and low-rank optimization problem and is then solved by the proposed difference-of-convex-functions algorithm.
M. M. Amiri and D. Gündüz, “Machine Learning at the Wireless Edge: Distributed Stochastic Gradient Descent Over-the-Air,” IEEE Transactions on Signal Processing, vol. 68, pp. 2155-2169, March 2020.
This paper focuses on the distributed stochastic gradient descent (DSGD) of federated learning at the wireless edge where power and bandwidth of devices are both limited. For digital DSGD (D-DSGD), gradient quantization and error accumulation are employed to compress the gradients while for analog DSGD (A-DSGD), gradients are first compressed locally and then all transmitted to the parameter server simultaneously by utilizing the over the air gradient computation. Due to the higher efficiency in using the limited bandwidth, the A-DSGD converges much faster than the D-DSGD in the simulation.
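The error-accumulation idea used in D-DSGD can be sketched with a toy 1-bit quantizer (our own simplification, not the paper's exact scheme):

```python
# Toy sketch (our own 1-bit scheme, not the paper's exact design) of
# quantization with error accumulation: the residual of each round's
# quantization is remembered and added back before the next round, so the
# transmitted totals never drift away from the true gradient totals.

def sign_quantize(v):
    scale = sum(abs(x) for x in v) / len(v)  # 1 bit per coordinate + a scale
    return [scale if x >= 0 else -scale for x in v]

grad_stream = [[0.3, -0.1], [0.25, -0.12], [0.28, -0.09], [0.31, -0.11]]
memory = [0.0, 0.0]           # accumulated quantization error
sent_sum = [0.0, 0.0]         # what the parameter server actually receives
for g in grad_stream:
    corrected = [gi + mi for gi, mi in zip(g, memory)]
    q = sign_quantize(corrected)                        # compressed update
    memory = [ci - qi for ci, qi in zip(corrected, q)]  # keep the residual
    sent_sum = [si + qi for si, qi in zip(sent_sum, q)]

# Invariant of error feedback: sent totals + leftover memory = true totals.
true_sum = [sum(g[i] for g in grad_stream) for i in range(2)]
print(sent_sum, memory, true_sum)
```

The invariant shows why heavy compression is tolerable: whatever a round discards is re-injected later rather than lost.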
M. M. Amiri and D. Gündüz, “Federated Learning Over Wireless Fading Channels,” IEEE Transactions on Wireless Communications, vol. 19, no. 5, pp. 3546-3557, May 2020.
This paper investigates distributed stochastic gradient descent (DSGD) based federated learning at the wireless edge. Various techniques for gradient compression and different schemes, including the digital DSGD (D-DSGD) scheme and the compressed analog DSGD (CA-DSGD) scheme, are proposed to implement the DSGD. The numerical result shows that the CA-DSGD scheme has clear advantages over the D-DSGD and is robust against the imperfect channel state information at the devices.
M.-Z. Chen, Z.-H. Yang, W. Saad, C.-C. Yin, H. V. Poor, and S.-G. Cui, “A Joint Learning and Communications Framework for Federated Learning Over Wireless Networks,” IEEE Transactions on Wireless Communications, vol. 20, no. 1, pp. 269-283, January 2021.
This paper studies training federated learning (FL) over a realistic wireless network. The joint training, wireless resource allocation, and user selection problem is formulated as an optimization problem. An expression on convergence rate is first derived to reflect the effects of the parameters of the wireless system on the FL. Then, an algorithm is developed to optimize the user selection and uplink resource block allocation, as well as the transmit power for each user to solve the optimization problem.
Z.-H. Yang, M.-Z. Chen, W. Saad, C.-S. Hong, and M. Shikh-Bahaei, “Energy Efficient Federated Learning Over Wireless Communication Networks,” IEEE Transactions on Wireless Communications, vol. 20, no. 4, pp. 2457-2471, April 2021.
This paper studies the transmission energy and computation resource allocation problem for federated learning (FL) over wireless communication systems. A distributed FL training algorithm is first proposed, and its convergence rate is derived as a function of the system settings, e.g., the power and computation resource allocation. Based on this convergence rate, an optimization problem is formulated to minimize the total energy consumption of the system under a latency constraint, and an iterative algorithm is proposed to solve it.
- VII. Standardization, Policy, and Regulation
S. Han, T. Xie, C.-L. I, L. Chai, Z. Liu, Y. Yuan, and C. Cui, “Artificial-Intelligence-Enabled Air Interface for 6G: Solutions, Challenges, and Standardization Impacts,” IEEE Communications Magazine, vol. 58, no. 10, pp. 73-79, November 2020.
This paper investigates the potential role of automation and artificial-intelligence based techniques in next generation cellular technologies. A radio access network (RAN) architecture in which a RAN AI controller and a RAN AI scheduler coordinate core RRC, RRM, and MAC procedures is described. The interfaces between the AI modules and network elements deemed necessary for standardization are detailed. Regarding a possible standardization of the AI-based physical layer, the paper argues for a progressive approach in which some key physical-layer modules, e.g., reference signals and frame structures, are initially hand-engineered while others, e.g., modulation and massive MIMO operations, are automated through machine learning techniques.
R. Li, Z. Zhao, X. Zhou, G. Ding, Y. Chen, Z. Wang, and H. Zhang, “Intelligent 5G: When Cellular Networks Meet Artificial Intelligence,” IEEE Wireless Communications, vol. 24, no. 5, pp. 175-183, March 2017.
This paper explores and proposes artificial intelligence-based solutions for further enhancing the system flexibility and use-case heterogeneity of 5G networks. Potential AI-driven extensions toward intelligent operation of 5G radio resource management, mobility management, management and orchestration, and service-driven management are exemplified. The paper also proposes a high-level 5G network architecture with an AI center serving as the central control and coordination unit between the radio access network, the core network, and the data traffic services.
C. X. Wang, M. Di Renzo, S. Stanczak, S. Wang, and E. G. Larsson, “Artificial Intelligence Enabled Wireless Networking for 5G and Beyond: Recent Advances and Future Challenges,” IEEE Wireless Communications, vol. 27, no. 1, pp. 16-23, March 2020.
This paper provides a comprehensive overview of ML-based solutions that are highly relevant to the design of beyond-5G communication networks. The paper focuses on the challenging design topics where ML techniques are considered as powerful toolsets, including channel measurement and modelling, large-scale wireless sensing, massive MIMO pilot design, localization, and resource orchestration of ultra-dense and diverse future network deployments. It also gives a summary of ongoing efforts in the standardization and global regulatory bodies, primarily 3GPP and ITU-T, with focus on application of ML/AI solutions to beyond-5G system design.
- VIII. Selected Topics
T. J. O’Shea, T. Roy, and T. C. Clancy, “Over-the-Air Deep Learning Based Radio Signal Classification,” IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, pp. 168-179, February 2018.
In this paper, the widely studied problem of modulation classification is revisited using a neural network operating on raw IQ samples. The authors demonstrate that neural networks can outperform the best-known alternative methods based on expert features for several realistic datasets obtained from over-the-air measurements and simulations.
X. Wang, L. Gao, S. Mao, and S. Pandey, “CSI-Based Fingerprinting for Indoor Localization: A Deep Learning Approach,” IEEE Transactions on Vehicular Technology, vol. 66, no. 1, pp. 763-776, January 2017.
The paper presents an indoor localization method based on deep learning (DL). The DL algorithm exploits channel state information in the frequency domain (amplitude and phase) from three distant antennas for an indoor OFDM system. Location is mapped to fingerprints that are the optimal weights of the deep learning network. Training and testing are performed with experimental data.
M. Kim, N.-I. Kim, W. Lee, and D.-H. Cho, “Deep Learning-Aided SCMA,” IEEE Communications Letters, vol. 22, no. 4, pp. 720-723, April 2018.
This article uses machine learning to design sparse code multiple access (SCMA) schemes. Deep neural networks (DNNs) are used to adaptively construct a codebook and learn a decoding strategy to minimize bit-error rate for SCMA. The proposed deep learning based SCMA can provide improved spectral efficiency and massive connectivity, which is a promising technique for 5G wireless communication systems.
C. Studer, S. Medjkouh, E. Gönültaş, T. Goldstein, and O. Tirkkonen, “Channel Charting: Locating Users within the Radio Environment Using Channel State Information,” IEEE Access, vol. 6, pp. 47682-47698, August 2018.
This paper uses passively collected channel state information (CSI) in conjunction with autoencoder neural networks and other machine learning methods in order to perform relative localization of users within a cell. Extensive simulation results demonstrate that the various proposed methods are able to successfully preserve the local geometry of users and that the autoencoder performs particularly well.
J. Vieira, E. Leitinger, M. Sarajlic, X. Li, and F. Tufvesson, “Deep Convolutional Neural Networks for Massive MIMO Fingerprint-Based Positioning,” in Proc. IEEE International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Montreal, Canada, October 2017.
The paper exploits the sparsity of the channel in the angular domain in a massive multi-input multi-output (MIMO) system to build a map between user position and channel angular pattern. To learn the mapping, the paper uses a convolutional neural network that is trained using measured and simulated channels.
A. Balatsoukas-Stimming, “Non-Linear Digital Self-Interference Cancellation for In-Band Full-Duplex Radios using Neural Networks,” in Proc. IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Kalamata, Greece, June 2018.
This paper uses neural networks to perform nonlinear self-interference cancellation in in-band full-duplex communications. Experimental results show that the neural network can achieve performance similar to a state-of-the-art memory-polynomial cancellation method, but with significantly lower computational complexity.
C. Häger and H. D. Pfister, “Nonlinear Interference Mitigation via Deep Neural Networks,” in Proc. Optical Fiber Communications Conference and Exposition (OFC), San Diego, USA, March 2018.
This paper uses a deep neural network to mitigate nonlinear effects in optical communications. Inspired by an existing nonlinearity compensation algorithm, the developed neural network structure gives simple and natural choices for the neural network hyperparameters.
J. Liu, R. Deng, S. Zhou, and Z. Niu, "Seeing the Unobservable: Channel Learning for Wireless Communication Networks," in Proc. IEEE Global Communications Conference (GLOBECOM), San Diego, USA, December 2015.
This is an initial work on channel learning. It develops a novel framework to infer unobservable CSI from observable CSI. In particular, it proposes a neural-network-based algorithm for cell selection in multi-tier networks. Simulations show that the average cell-selection accuracy of the proposed algorithm, which requires no genuine location information, is only 3.9% lower than that of a location-aided algorithm that does.
S. Chen, Z. Jiang, J. Liu, R. Vannithamby, S. Zhou, Z. Niu, and Y. Wu, “Remote Channel Inference for Beamforming in Ultra-Dense Hyper Cellular Network,” in Proc. IEEE Global Communications Conference (GLOBECOM), Singapore, December 2017.
This paper proposes a learning-based channel estimation method for coordinated beamforming in ultra-dense networks. It first shows that the channel state information of geographically separated base stations (BSs) exhibits strong non-linear correlations. Then, an artificial neural network is used to remotely infer the quality of different beamforming patterns at a dense-layer BS. Moreover, by involving more candidate beam patterns, a joint learning scheme for multiple BSs is developed to further improve the performance.
N. Ye, X. Li, H. Yu, L. Zhao, W. Liu, and X. Hou, “DeepNOMA: A Unified Framework for NOMA using Deep Multi-Task Learning,” IEEE Transactions on Wireless Communications, vol. 19, no. 4, pp. 2208-2225, January 2020.
This paper proposes a deep multi-task learning based unified design framework that enables end-to-end optimization of non-orthogonal multiple-access (NOMA) schemes. The key building blocks of conventional NOMA methods, i.e., multiple-access signature mapping and multi-user detection, are modelled and executed by DNN architectures, namely the DeepMAS and DeepMUD modules, while the unified framework coherently integrates these modules using an autoencoder structure to ensure joint configuration of the correlated learning tasks of NOMA schemes.
M. S. Sim, Y. G. Lim, S. H. Park, L. Dai, and C. B. Chae, “Deep Learning-based mmWave Beam Selection for 5G NR/6G with sub-6 GHz Channel Information: Algorithms and Prototype Validation,” IEEE Access, March 2020.
This paper addresses the latency issue inherent in mmWave-based initial access procedures, primarily in the context of 5G NR systems. It proposes a supervised learning method that takes sub-6 GHz channel characteristics, mainly power-delay profile (PDP) signatures, as input and identifies the best mmWave beams as the output of a DNN. The training-set generation step collects the optimal mmWave beam index and the corresponding sub-6 GHz PDP signature during standard network operation. The inference step returns the most likely mmWave beam index using the uplink sub-6 GHz sounding reference signal (SRS) as input to the trained DNN.