

Publication Date

Second Quarter 2023

Manuscript Submission Deadline

15 April 2022

Special Issue

Call for Papers

Distributed machine learning is envisioned as the bedrock of future intelligent networks and the Internet of Things (IoT), where intelligent agents exchange information with each other to train learning models collaboratively without uploading data to a central processor. Despite its broad applicability, a downside of distributed learning is the need for iterative information exchange between agents, which may lead to communication overhead that is unaffordable in many practical systems with limited communication resources such as energy and bandwidth. To resolve this communication bottleneck, we need to devise communication-efficient distributed learning algorithms and protocols that simultaneously reduce the communication cost and achieve satisfactory learning/optimization performance. Accomplishing this goal necessitates synergistic techniques from a diverse set of fields, including optimization, machine learning, wireless communications, game theory, and network/graph theory. This special issue aims to collect contributions on communication-efficient distributed learning from a multitude of perspectives, including fundamental theories, algorithm design and analysis, and practical considerations. We solicit high-quality original papers on topics including, but not limited to:

  • Techniques that reduce the number or frequency of communication rounds (e.g., event triggering, local updates with infrequent information exchange) in distributed learning
  • Quantization, sparsification, and compression methods for distributed learning
  • Novel methods for distributed learning with limited communication resources such as energy and bandwidth
  • Fundamental performance limits for distributed learning with limited communication resources
  • Impact of network topology (e.g., time-varying graph, directed graph) on communication-efficient distributed learning
  • Efficient distributed learning with practical communication conditions, such as wireless interference, noisy/time-varying/fading channels, and multiple access
  • Game-theoretic mechanisms that incentivize users with limited communication resources to participate in, and devote sufficient resources to, distributed learning
  • Network resource management (e.g., spectrum/power allocation) for communication-efficient distributed learning
  • Communication-efficient distributed inference
  • Communication-efficient distributed reinforcement/meta/deep learning and other novel learning paradigms
  • Communication-efficient distributed learning for emerging applications such as IoT and unmanned vehicles
  • Novel network protocols and architectures for communication-efficient distributed learning
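To make the compression-related topics above concrete, the following is a minimal illustrative sketch (not part of the call itself) of two standard gradient-compression operators often studied in this area: top-k sparsification and 1-bit (sign) quantization with a per-vector scale. The agent count, dimension, and function names are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of the gradient.

    Only (index, value) pairs for the k surviving entries need to be
    transmitted, so the per-round payload shrinks from len(grad)
    full-precision floats to k index/value pairs.
    """
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

def quantize_1bit(grad):
    """Sign quantization with a shared per-vector scale, in the spirit
    of signSGD-style schemes: each coordinate costs one bit, plus one
    float for the scale.
    """
    scale = np.mean(np.abs(grad))
    return scale * np.sign(grad)

# Toy "distributed" round: four agents compress their local gradients
# before they are averaged, standing in for the information-exchange
# step of a distributed learning algorithm.
dim, k = 1000, 50
local_grads = [rng.normal(size=dim) for _ in range(4)]
compressed = [quantize_1bit(top_k_sparsify(g, k)) for g in local_grads]
averaged = np.mean(compressed, axis=0)

# Each compressed gradient has at most k nonzero entries (sparsify
# zeroes the rest, and sign quantization maps 0 to 0).
print(np.count_nonzero(compressed[0]))
```

The key trade-off the call highlights appears even in this toy setting: each agent transmits roughly k index/value pairs per round instead of `dim` full-precision floats, at the cost of a compression error that the learning-theoretic analyses solicited above must account for.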

Submission Guidelines

Prospective authors should prepare their manuscripts in accordance with the IEEE JSAC format. Papers should be submitted through EDAS according to the following schedule:

Important Dates

Manuscript Submission: 15 April 2022
First Notification: 1 July 2022
Revised Papers Due: 15 August 2022
Final Acceptance Notification: 1 November 2022
Final Manuscript Due: 15 November 2022
Publication: Second Quarter 2023

Guest Editors

Xuanyu Cao (Lead Guest Editor)
Hong Kong University of Science and Technology, Hong Kong

Tamer Başar
University of Illinois at Urbana-Champaign, USA

Suhas Diggavi
University of California, Los Angeles, USA

Yonina Eldar
Weizmann Institute of Science, Israel

Khaled B. Letaief
Hong Kong University of Science and Technology, Hong Kong

H. Vincent Poor
Princeton University, USA

Junshan Zhang
University of California, Davis, USA