Welcome to the Media Center, where you can find the latest original video content from ComSoc's conferences and events. It features keynote speakers, executive forums, workshops, industry panels, and much more from ComSoc's events, including the IEEE Global Communications Conference (GLOBECOM) and the IEEE International Conference on Communications (ICC). These videos bring insights to you when you need them. Your ComSoc membership offers free access to much of this valuable content; simply log in with your IEEE account.
IEEE members and non-members can purchase videos after logging into their IEEE account. If you do not have an IEEE account, click "Create Account" to create a free account and make a purchase.
When Your Phone Is Turned On, It Says ALOHA: A Tribute to the ALOHAnet Pioneers. In Memoriam: Norman Abramson, Professor Emeritus of Electrical Engineering at the University of Hawai'i and ALOHAnet co-founder.
We are arriving at the end of an era that has guided ICT for the last century. Remarkably, many of the engineering breakthroughs in communications (the famous "G" era) and computing (the famous "Moore's" era) were built on quite old fundamentals. Indeed, the Nyquist sampling theorem dates back to 1924, Shannon's law to 1948, and the von Neumann architecture to 1946. Today we are desperately lacking guidance for new engineering solutions, as we have approached those limits, and the whole industry needs to take its share of responsibility by re-investing massively in the fundamentals to revive a new century of engineering progress. In this talk, we will revisit the assumptions made a century ago and provide a research roadmap showcasing the fundamental role of mathematics and physics in unlocking the theoretical barriers.
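The two classical results the talk refers to can be stated compactly in their standard textbook forms:

```latex
% Nyquist (1924/1928): a signal band-limited to B Hz is fully determined
% by samples taken at a rate of at least twice the bandwidth:
f_s \ge 2B
% Shannon (1948): the capacity of an AWGN channel of bandwidth B with
% signal-to-noise ratio S/N, in bits per second:
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

These are the limits the speaker argues modern systems have now approached.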
Barely seen even in action movies until a decade ago, UAVs are progressively blending into our daily lives and will greatly impact labor and leisure activities alike. Most stakeholders regard reliable connectivity as a must-have for the UAV ecosystem to thrive, and the wireless research community has been rolling up its sleeves to drive native and long-lasting support for UAVs in 5G and beyond. Moving up, more affordable launches into low Earth orbit are luring new players into the space race, making a marriage between the satellite and cellular industries more likely than ever. In this talk, we will navigate through 5G-to-6G use cases, requirements, and enablers involving aerial and spaceborne communications, aiming also to act as a catalyst for much-needed new research.
Exploiting the frequency ranges above 6 GHz has become a hallmark of modern wireless systems. The use of the 20-100 GHz spectrum was a key characteristic of 5G systems, and the 100-500 GHz frequency range will be an important component of 6G. This talk will first discuss the characteristics of wireless propagation channels in those frequency bands, reviewing the fundamentals and then presenting our recent measurement results in outdoor environments, including results above 100 GHz that show the feasibility of high-rate data links at distances up to 100 m in both line-of-sight and many non-line-of-sight situations. At the same time, these measurements indicate that many common assumptions about such high-frequency channels, e.g., with respect to sparsity, might not hold under all circumstances. Based on this discussion of the channels, the talk will then investigate single- and multi-user capacity, signaling methods, and transceiver structures that are especially suitable for ultra-high data rates in these high frequency bands.
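As a rough sanity check on link budgets in these bands, free-space path loss follows the Friis formula. The sketch below is an illustrative calculation, not from the talk; the 140 GHz carrier is an assumed sub-THz example evaluated over the 100 m distance mentioned above.

```python
import math

def fspl_db(freq_hz: float, dist_m: float) -> float:
    """Free-space path loss (Friis formula) in dB."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

# An assumed 140 GHz link at 100 m suffers roughly 115 dB of free-space
# loss, which highly directional antennas must compensate for.
loss = fspl_db(140e9, 100.0)
```

Doubling either the distance or the carrier frequency adds 6 dB, which is why antenna gain becomes essential as systems move from 28 GHz toward the sub-THz range.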
5G rollouts have stimulated new demand that cannot be met by 5G itself. That is where 5G-Advanced comes into play, delivering enhanced capabilities. Without a doubt, 5G-Advanced will in turn stimulate new demands that only 6G can address, and looking into these demands will be crucial to defining 6G. ITU-R is leading the effort to study future technology trends (FTT) and the 6G vision, aiming to issue the FTT report and the vision recommendation by the end of 2022 and in the middle of 2023, respectively. 6G will go far beyond communications: it will serve as a distributed neural network that provides communication links to fuse the physical, cyber, and biological worlds, truly ushering in an era in which everything will be sensed, connected, and intelligent. Beyond connecting people and things, we predict that 6G will be the platform for connected intelligence, where the mobile network connects vast numbers of intelligent devices and connects them intelligently. This talk will start with 5G-Advanced as an introduction, then present an overall vision for 6G with drivers, use cases, KPIs, a roadmap, and key capabilities. Six key capabilities will be discussed further, including potential technologies, research directions, and associated challenges: (1) extreme connectivity, (2) native AI, (3) networked sensing, (4) integrated non-terrestrial networks, (5) native trustworthiness, and (6) sustainability.
Research activities in academia and industry worldwide towards the sixth-generation (6G) mobile communication system have recently gained considerable momentum. In this overview we will highlight the anticipated 6G timeline and technology concepts, which will have to fulfil even more stringent requirements than 5G, such as ultra-high data rates, energy efficiency, global coverage and connectivity, and extremely high reliability and low latency. Among the candidate 6G technologies are sub-terahertz and terahertz (THz) waves, with frequencies extending from 0.1 THz up to 10 THz, in the spectral region between microwaves and optical waves. The prospect of offering large contiguous frequency bands to meet the demand for the highest data transfer rates, up to the terabit-per-second range, makes this a key research area for 6G mobile communication. These efforts require an interdisciplinary approach, with close interaction with high-frequency semiconductor technology for RF electronics, but also including alternative approaches using photonic technologies. The THz region also shows great promise for many application areas, ranging from imaging to spectroscopy and sensing. To fully exploit the potential of this frequency range, it is also crucial to understand its propagation characteristics through channel measurements, which inform the development of future communication standards. We will highlight the characteristics of channel propagation in this frequency region and present new results from channel measurements at 158 GHz and 300 GHz.
Today, channel codes are among the fundamental parts of any communication system, including cellular, WiFi, and deep space, among others, enabling reliable communication in the presence of noise. Decades of research have led to breakthrough inventions of various families of channel codes. Yet no unified approach exists for answering two fundamental questions: Given a channel, how do we efficiently construct the best possible code? And given a channel code, how do we design an efficient and optimal decoder? In this talk, we will discuss how the remarkable advancements in data-driven machine learning (ML) can be leveraged toward answering these questions. In particular, we will focus on a class of codes rooted in the Plotkin recursive construction. This class includes Reed–Muller (RM) codes, the state-of-the-art binary algebraic codes, as well as polar codes, the first capacity-achieving codes with explicit, i.e., non-randomized, constructions. In the first part of this talk, we will present an efficient and close-to-optimal decoder for RM codes, obtained by learning a pruning process applied to an exponentially complex decoder. In the second part, we will tackle the fundamental problem of designing new channel codes. In particular, we will demonstrate KO codes, a new class of channel codes designed by training neural networks while preserving Plotkin-like structures. KO codes beat both their RM and polar code counterparts under successive cancellation decoding in the challenging short-to-medium blocklength regime. We will also discuss various challenges that must be overcome to pave the way for adopting such ML-aided channel coding strategies in practice.
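To make the Plotkin construction concrete, here is a minimal sketch (illustrative, not the speakers' code) that builds a Reed–Muller generator matrix from the classic (u, u+v) recursion: RM(r, m) is formed from codewords (u, u+v) with u in RM(r, m-1) and v in RM(r-1, m-1).

```python
import numpy as np

def rm_generator(r: int, m: int) -> np.ndarray:
    """Generator matrix of the Reed–Muller code RM(r, m), built
    recursively via the Plotkin (u, u+v) construction."""
    if r == 0:
        return np.ones((1, 2**m), dtype=int)   # repetition code
    if r == m:
        return np.eye(2**m, dtype=int)         # the whole space F_2^(2^m)
    G_u = rm_generator(r, m - 1)       # u-component: RM(r, m-1)
    G_v = rm_generator(r - 1, m - 1)   # v-component: RM(r-1, m-1)
    top = np.hstack([G_u, G_u])                   # rows encode (u, u)
    bottom = np.hstack([np.zeros_like(G_v), G_v]) # rows encode (0, v)
    return np.vstack([top, bottom]) % 2
```

For example, `rm_generator(1, 3)` yields the 4x8 generator of RM(1, 3), whose minimum distance is 2^(m-r) = 4; polar codes arise from the same recursive kernel with a different row selection.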
Edge devices collect massive amounts of data, opening up new potential for machine learning applications. Machine learning at the edge can benefit from exploiting both data and processing power distributed across many wireless devices, but this brings many new challenges, including the low-latency requirements of learning applications, privacy concerns preventing data sharing, and the impact of noise and interference on the convergence of the learning process. Overcoming these challenges while meeting the requirements of the machine learning tasks calls for a new paradigm of semantic-oriented communication network design tailored for learning applications. In this talk, I will present recent results on efficient distributed inference and training over wireless networks, taking into account channel impairments and the power and bandwidth limitations of wireless devices, as well as the semantics of the underlying learning tasks. This will involve bringing together novel communication and coding techniques with distributed learning and inference algorithms.
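One way channel impairments enter distributed training is in analog "over-the-air" aggregation, where the wireless medium itself superposes the clients' model updates. The sketch below is a toy illustration under an assumed additive Gaussian receiver noise model, not the speaker's actual scheme:

```python
import numpy as np

def over_the_air_average(client_updates, noise_std, rng=None):
    """Toy over-the-air aggregation: the channel adds up all client
    updates, and the server divides the noisy sum by the client count."""
    rng = rng or np.random.default_rng(0)
    superposed = np.sum(client_updates, axis=0)   # channel sums the signals
    noise = rng.normal(0.0, noise_std, superposed.shape)
    return (superposed + noise) / len(client_updates)

# With three clients and a noiseless channel, the server recovers the
# exact average of the updates:
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
avg = over_the_air_average(updates, noise_std=0.0)  # → [3.0, 4.0]
```

The appeal is that bandwidth use does not grow with the number of clients, at the cost of aggregation noise that interacts with the learning algorithm's convergence.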
As Wi-Fi "strikes again" with 802.11be, this forum will host a discussion on its evolution, the ongoing 802.11be standardization, the opportunities created by the progressive adoption of the 6 GHz spectrum, and the increased interest in supporting not only higher capacity but also reliable, low-latency applications over Wi-Fi. Experts from industry and academia will share their experience in driving standards and product development, spectrum and technology regulation, and research visions.
Unmanned aerial vehicles (UAVs) have found fast-growing applications during the past few years. As such, it is imperative to develop innovative communication technologies for supporting reliable UAV command and control (C&C) as well as mission-related payload communication. However, traditional UAV systems mainly rely on simple direct communication between the UAV and the ground pilot over unlicensed spectrum (e.g., the 2.4 GHz ISM band), which is typically low-rate, unreliable, insecure, vulnerable to interference, difficult to legitimately monitor and manage, and able to operate only within visual line-of-sight (LoS) range. To overcome these limitations, there has been significant interest in integrating UAVs into cellular communication systems. On the one hand, UAVs with their own missions can be connected to cellular networks as new aerial users. Thanks to advanced cellular technologies and the almost ubiquitous accessibility of cellular networks, cellular-connected UAVs are expected to achieve orders-of-magnitude performance improvement over existing point-to-point UAV communications. This also offers an effective option to strengthen legitimate UAV monitoring and management, and to achieve more robust UAV navigation by utilizing cellular signals as a complement to GPS (Global Positioning System). On the other hand, dedicated UAVs can be deployed as aerial base stations (BSs), access points (APs), or relays to assist terrestrial wireless communications from the sky, leading to another paradigm known as UAV-assisted communications. UAV-assisted communications have several promising advantages, such as on-demand deployment, high flexibility in network reconfiguration, a high chance of LoS communication links, and support for numerous applications such as BS traffic offloading and information dissemination and collection for the Internet of Things (IoT).
UAV communications differ significantly from conventional communication systems due to the high altitude and high mobility of UAVs, the unique channel characteristics of UAV-ground links, the asymmetric quality-of-service (QoS) requirements of downlink C&C and uplink mission-related data transmission, the stringent constraints imposed by the size, weight, and power (SWAP) limitations of UAVs, and the additional design degrees of freedom enabled by joint UAV mobility control and communication resource allocation.
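A widely used way to capture the "high chance of LoS" on air-to-ground links is a sigmoid model in the elevation angle. The sketch below is a generic illustration; the a and b fit parameters are environment-dependent, and the urban-like values used here are assumptions for illustration, not measurements from this talk:

```python
import math

def p_los(uav_height_m: float, ground_dist_m: float,
          a: float = 9.61, b: float = 0.16) -> float:
    """Sigmoid elevation-angle model for the probability that an
    air-to-ground link is line-of-sight; a and b are environment-
    dependent fit parameters (illustrative urban-like values)."""
    theta_deg = math.degrees(math.atan2(uav_height_m, ground_dist_m))
    return 1.0 / (1.0 + a * math.exp(-b * (theta_deg - a)))

# Flying higher (steeper elevation angle) raises the LoS probability:
low = p_los(50, 1000)    # shallow angle, link mostly blocked by buildings
high = p_los(500, 1000)  # steep angle, link mostly clear
```

This kind of model is one reason UAV altitude becomes a design degree of freedom: raising the platform trades extra path length for a better chance of an unobstructed link.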
Artificial intelligence (AI) and big data are both viewed as cornerstones for building beyond-5G (B5G) zero-touch automated wireless networks. To harness the full potential of automation, AI algorithms should be driven by the distributed nature of datasets across the network. This distribution is sometimes due to the network topology itself, where performance data collection is performed per domain or node (e.g., radio access, edge cloud), but data are also produced by applications running on scattered user devices. In such a case, opting for a centralized data collection system would result in high network bandwidth and energy consumption as well as significant delay in transferring the data to the classical operations support system (OSS). Centralization would also breach the privacy and security of end-user applications. In this context, standardization efforts have been made to decentralize AI algorithms. In ETSI's zero-touch architecture, for instance, each network domain is endowed with a data collection element that feeds a local AI analytics and decision entity. The central entity plays only the role of a coordinator/model aggregator, without access to the distributed raw datasets. A successful AI deployment should therefore be distributed in space, ranging from user devices to the core network, and evolve in time, from collaborative AI to advanced federated learning. To this end, active research has been carried out to come up with efficient distributed AI architectures. The main challenge faced by researchers is the cost incurred by the bidirectional communication between the locally trained models and the global one. This cost is determined by the number of iterations until convergence as well as the underlying energy consumption per channel use. Additionally, deploying AI on edge devices requires the adoption of low-complexity models intended to run on optimized dedicated hardware to preserve battery lifetime.
A decentralized solution with complex models is therefore not viable. Decentralized AI has manifold use cases. User devices with dedicated AI chips may enjoy a higher degree of security and privacy, since they avoid exchanging any raw data with centralized cloud servers. They may also react quickly with locally taken decisions, which suits low-latency applications and helps mitigate security risks. Moreover, a high density of network nodes or an exponential increase in user devices induces no significant complexity, since network intelligence is scattered among a massive number of nodes and user equipment, thereby offering a high degree of scalability.
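One standard lever against the per-iteration communication cost mentioned above is sparsifying each model update before transmission. The top-k sketch below is a generic illustration of the idea, not tied to any specific standard or product:

```python
import numpy as np

def top_k_sparsify(update: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest-magnitude entries of a model update,
    zeroing the rest, so only k (index, value) pairs need transmitting."""
    idx = np.argpartition(np.abs(update), -k)[-k:]
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    return sparse

u = np.array([0.1, -4.0, 0.3, 2.5, -0.2])
s = top_k_sparsify(u, 2)  # keeps -4.0 and 2.5, zeros the rest
```

In practice such sparsification is usually paired with error feedback (accumulating the dropped residual locally) so that convergence is preserved despite the lossy uplink.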
KEYNOTE 1: DISTRIBUTED MACHINE LEARNING AT THE WIRELESS EDGE. SPEAKER: PROF. DENIZ GÜNDÜZ, IMPERIAL COLLEGE LONDON, UK. Abstract: IoT devices collect significant amounts of data at the wireless edge, opening up new potential for machine learning applications. The current approach to edge intelligence is to offload all the collected data to a cloud server for central processing. This approach is not sustainable considering the expected growth in the number of IoT devices and the traffic they generate. Moreover, it creates significant privacy risks for users and introduces delays that cannot be tolerated by most applications. The alternative is to bring the intelligence to the edge by distributing both the training and the inference tasks across edge devices and servers. In this talk, I will present recent results on efficient distributed inference and training over wireless channels, taking into account channel impairments as well as power and bandwidth limitations of wireless devices. This will involve bringing together novel communication and coding techniques with distributed learning algorithms. SPEAKER: JULIEN FORGEAT, ARTIFICIAL INTELLIGENCE, ERICSSON RESEARCH. Bio: Julien Forgeat is an artificial intelligence principal researcher at Ericsson Research. He joined Ericsson in 2010 after spending several years working on network analysis and optimization. He holds an M.Eng. in computer science from the National Institute of Applied Sciences in Lyon, France. At Ericsson, Julien has worked on mobile learning, the Internet of Things, and big data analytics before specializing in machine learning and AI infrastructure. His current research focuses on the software components required to run AI and machine learning workloads on distributed infrastructures, as well as the algorithmic approaches best suited to complex distributed and decentralized use cases.
The new generation of the Internet of Things involves the Internet of Mobile Things (IoMT), which lets increasingly mobile objects make better operational decisions by pooling data and resources from other connected vehicles and devices. Due to the enormous research and commercial potential, many companies and researchers are attracted to this area. This workshop aims to bring researchers working on the future IoMT under one roof to discuss implementation, applications, and possible standardization efforts. We expect that the authors can together make a significant impact within this domain and share their knowledge and experience with members of the research community, the commercial sector, and wider audiences.
This academic keynote is on the future of MIMO communication. Bio: Robert W. Heath Jr. received the Ph.D. in EE from Stanford University. He is a Distinguished Professor at North Carolina State University and the President and CEO of MIMO Wireless Inc. Prof. Heath is a recipient of several awards, including the 2016 IEEE Communications Society Fred W. Ellersick Prize, the 2016 IEEE Communications Society and Information Theory Society Joint Paper Award, the 2017 IEEE Marconi Prize Paper Award, the 2017 EURASIP Technical Achievement Award, the 2019 IEEE Communications Society Stephen O. Rice Prize, the 2019 IEEE Kiyo Tomiyasu Award, and the 2020 IEEE SPS Donald G. Fink Overview Paper Award. He co-authored "Millimeter Wave Wireless Communications" (Prentice Hall, 2014) and "Foundations of MIMO Communications" (Cambridge, 2019). He was Editor-in-Chief of IEEE Signal Processing Magazine from 2018 to 2020. He is a current member-at-large of the IEEE Communications Society Board of Governors (2020-2022) and a past member-at-large of the IEEE Signal Processing Society Board of Governors (2016-2018). He is a licensed Amateur Radio Operator, a registered Professional Engineer in Texas, a Private Pilot, a Fellow of the National Academy of Inventors, and a Fellow of the IEEE.
This VIP keynote panel is on The Art of the Possible: Three Tech Leaders Share Their Practical Insights and Vision Around a Few of the Biggest Trends in the Industry. PANELISTS: TODD ZEILER, Assistant Vice President of Network Services, AT&T. Bio: Todd is Assistant Vice President of Network Services. His team owns global network architecture, implementation, inter-carrier usage mediation/delivery, and network operations for wholesale, domestic, and international roaming as well as network-sharing services. His team's mission statement is to "paint the world AT&T blue" with a seamless mobility experience. Todd recently transitioned from a four-year stint as Director, Member of Technical Staff, Converged Access & Device Technology, where his team owned wireless access architecture for 5G, LTE Advanced/Pro, IoT, FirstNet, Fixed Wireless, and Enterprise. He has more than 25 years of industry experience, beginning his career in BellSouth Outside Plant Engineering in 1992. Todd's larger projects included serving as program lead for the integration of AT&T's purchase of Alltel in 2009 and various technology overlays, including the recent 5G architecture evolution roadmap. Todd has held positions in outside plant, wireless operations, RF engineering, RF performance, systems automation, equipment engineering, project management, mobility core planning, M&A projects, and in-building mobility (ASG), and was the Director for the GA Radio Access Network prior to his role in the CTO Wireless Architecture Organization. Todd holds a Bachelor's in Electrical Engineering from Auburn University. He resides in Atlanta, is married with three daughters, and enjoys being with and teaching his church family, speaking engagements, and sports and other outdoor activities. KEVIN SHEEHAN, CTO of the Americas, Ciena. Bio: Kevin Sheehan serves as CTO of the Americas and VP of Strategic Solution Sales for Ciena.
He has more than 25 years of experience leading high-performance cross-functional teams and building very successful product lines and early-stage companies. Prior to his current role at Ciena, Kevin was General Manager of Ciena Agility, where he was responsible for building and leading Ciena's software business. Before that, Kevin was a key leader and strategist within one of Ciena's fastest-growing business segments while serving as Ciena's Vice President of Product Line Management for packet networking solutions. Before his time at Ciena, from 2003 to 2011, Kevin was CEO of Hatteras Networks, where he led the company from zero revenue to tens of millions in annual revenue with profitable growth. Before joining Hatteras, Kevin held senior leadership positions with Alcatel, Packet Engines, and SMC. Kevin holds a Bachelor's degree in Engineering and a Master of Science degree from Stony Brook University in New York, and a Master of Business Administration from Dowling College. Kevin has been recognized globally with an American Business Awards "Stevie Award" as Best Telecommunications CEO in 2008 and Light Reading's Leading Lights CEO of the Year Award in 2006. IBRAHIM GEDEON, CTO, TELUS. Bio: Ibrahim Gedeon is one of the global telecommunications industry's eminent thought leaders. He has carved out an international career by combining insight and skill as an applied scientist with a lighthearted approach to leadership. As Chief Technology Officer for TELUS, a leading national telecommunications company in Canada, he is responsible for all technology development and strategy, security, service and network architecture, service delivery and operational support systems, as well as service and network convergence, and network infrastructure strategies and evolution. Under his leadership the TELUS wireless broadband network has become one of the best in the world.
Ibrahim serves on the boards of the Next Generation Mobile Networks Alliance, the Alliance for Telecommunications Industry Solutions, and the Institute for Communication Technology Management. In addition to his industry leadership roles, he has been awarded the IEEE Communications Society's prestigious Distinguished Industry Leader Award and elected a Fellow of the Canadian Academy of Engineering (CAE) for his significant contributions to the field of engineering. Ibrahim has also been named one of the 100 most powerful and influential people in the telecoms industry in Global Telecoms Business magazine's GTB Power 100. He holds a Bachelor's degree in Electrical Engineering from the American University of Beirut, a Master's in Electronics Engineering from Carleton University, and an Honorary Doctor of Laws degree from the University of British Columbia, and is passionate about supporting engaged, high-performing teams.
Future wireless systems will require a paradigm shift in how they are networked, organized, configured, optimized, and recovered automatically, based on their operating situations. Emerging Internet of Things (IoT) and Cyber-Physical Systems (CPS) applications aim to bring people, data, processes, and things together to fulfill the needs of our everyday lives. With the emergence of software-defined networks, adaptive services and applications are gaining much attention, since they allow automatic configuration of devices and their parameters, systems, and services in response to changes in the user's context. Upcoming Fifth Generation and Beyond (5G&B) wireless networks, which are far more than an extension of 4G, are expected to be the backbone of IoT and CPS, supporting IoT systems by expanding their coverage, reducing latency, and enhancing data rates. However, several challenges must be addressed to provide resilient connections supporting the massive number of often resource-constrained IoT and other wireless devices. Hence, traditional communication protocols and techniques are not suited to several unique requirements of emerging applications, such as low latency, low cost, low energy consumption, and resilient and reliable connections.
As hordes of data-hungry devices challenge its current capabilities, Wi-Fi strikes again with 802.11be, alias Wi-Fi 7. This brand-new amendment promises a (r)evolution of unlicensed wireless connectivity as we know it, unlocking access to gigabit, reliable, and low-latency communications and reinventing manufacturing and social interaction through digital augmentation. More than that, time-sensitive networking protocols are being put forth with the overarching goal of making wireless the new wired. As its standardization process consolidates, we will provide an updated digest of the essential features of 802.11be, place the spotlight on some of the must-haves for critical and delay-sensitive applications, and illustrate their benefits through standard-compliant simulations.
At present, the O-RAN architecture provides a promising path to an open-RAN ecosystem, where, based on the defined functional splits (CU, DU, RU), a multi-vendor solution can in theory be achieved. This so-called "Wave 1.0" 5G is capable of only basic (coarse) virtualization, while introducing the essential interfaces that enable an open ecosystem: E2 for the control of the CU/DU/RU, and A1, O1, and O2 for policy-based management, network configuration, and monitoring. Based on IS-Wireless's analysis and experience (including as an O-RAN member), this existing state of the art should be upgraded to what we call open-RAN Wave 2.0 in order to allow greater flexibility of functional splits and to improve the ability to address the challenges of ultra-dense networks. Flexibility of functional splits is essential for adjusting open-RAN based networks to the capabilities of existing infrastructure, including not only fronthaul but also midhaul interfaces. Fronthaul here refers mainly to splits beyond option 6, especially the O-RAN 7.2 split, which requires a certain level of capacity; that requirement may even quadruple with split 7.1. The midhaul, e.g., where the CU-CP with the RIC (RAN intelligent controller), core, MEC, and application servers are located, can also vary in capacity. With highly granular network functions packaged as VNFs/CNFs (virtual machines or containers), and with a multitude of split options, it becomes easier to tailor the deployment of an open-RAN network to the available fronthaul and to optimize hardware and network cost. Moreover, it is then more convenient to orchestrate such "workloads" (i.e., 5G radio stack functions) across the edge-cloud continuum, including edge micro data centers. In this way, multiple split-association types can also be achieved naturally, e.g., split per slice, per UE, or per bearer.
The underlying compute resources can also be utilized more efficiently, as particular workloads can be mapped to a variety of acceleration cards (GPU, FPGA, SmartNIC) or computer architectures (x86, ARM). Such a fine-grained, highly composable (orchestrated), disaggregated open RAN can eventually be called open-RAN Wave 2.0, as it enables higher capacities for network operators aiming to address the challenges of ultra-dense networks. Efficient data-driven management of both radio and compute resources, together with novel paradigms like cell-free (or distributed cell-free massive MIMO) networks, becomes more straightforward to implement with such improved open-RAN architectures.
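A back-of-the-envelope way to see why the lower split costs more fronthaul: uncompressed I/Q bitrate scales with the number of parallel streams, which is antenna ports for split 7.1 but spatial layers for split 7.2. All numbers below (122.88 Msps sampling, 16-bit I and Q samples, 16 ports vs. 4 layers) are illustrative assumptions, and real split 7.2 traffic is frequency-domain with compression, so lower still.

```python
def iq_rate_gbps(streams: int,
                 sample_rate_msps: float = 122.88,
                 iq_bits: int = 2 * 16) -> float:
    """Uncompressed I/Q bitrate in Gbps (no framing overhead) for a
    given number of parallel streams; all parameters are illustrative."""
    return streams * sample_rate_msps * 1e6 * iq_bits / 1e9

rate_71 = iq_rate_gbps(streams=16)  # split 7.1: one stream per antenna port
rate_72 = iq_rate_gbps(streams=4)   # split 7.2: one stream per spatial layer
# With 16 ports but only 4 layers, split 7.1 needs ~4x the fronthaul capacity,
# matching the "may even quadruple" observation above.
```

This is why matching the split option to the available fronthaul capacity, rather than fixing one split for all deployments, is central to the Wave 2.0 argument.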