WELCOME TO THE 3RD WORKSHOP ON "EDGE LEARNING OVER 5G MOBILE NETWORKS AND BEYOND"
4-8 December 2022, Rio de Janeiro, Brazil
STEERING COMMITTEE MEMBERS:
- Petar Popovski, Department of Electronic Systems, Aalborg University, Denmark (firstname.lastname@example.org).
- Robert Schober, Institute for Digital Communications, Friedrich-Alexander University of Erlangen-Nuremberg, Germany (email@example.com).
- Rui Zhang, Department of Electrical and Computer Engineering, National University of Singapore, Singapore (firstname.lastname@example.org).
- Mingzhe Chen, Princeton University, USA (email@example.com).
- Zhaohui Yang, University College London, UK (firstname.lastname@example.org).
- Changsheng You, Southern University of Science and Technology, China (email@example.com).
- Kaibin Huang, University of Hong Kong, HK (firstname.lastname@example.org).
- Ayfer Özgür, Stanford University, CA, USA (email@example.com).
SCOPE AND TOPICS OF THE WORKSHOP
Standard machine learning approaches require centralizing the training data in a single data center or cloud. Since massive volumes of data samples must be uploaded to the data center, transmission delay can be very high, and user privacy is not guaranteed. However, low latency and privacy are important requirements in emerging application scenarios such as unmanned aerial vehicles, extended reality (XR) services, and autonomous driving, which makes centralized machine learning approaches inapplicable. Moreover, due to limited communication resources, it is impractical for all the wireless devices engaged in learning to transmit all of their collected data to a data center that runs a centralized learning algorithm for data analytics or network self-organization. It therefore becomes increasingly attractive to deploy learning algorithms at edge devices, an approach called edge learning.
A typical edge learning framework (e.g., federated learning) features distributed learning over many wireless end-user devices that cooperate with edge nodes, such as access points or base stations, to train a common AI model using local data. This scenario typically involves an iterative learning process in which tens to hundreds of edge devices repeatedly download and upload possibly high-dimensional model parameters or their updates (millions to billions of parameters). This can generate substantial data traffic, placing a heavy burden on already congested radio access networks. The training problem cannot be solved efficiently with traditional wireless techniques that target rate maximization and are decoupled from learning. Achieving edge learning with high communication efficiency calls for the design of new wireless techniques based on an integrated communication-and-learning approach.
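To make the iterative process above concrete, the following is a minimal sketch of one such scheme, federated averaging (FedAvg): the server broadcasts a global model, each device trains on its local data, and the server aggregates the returned models weighted by local dataset size. The device count, model dimension, and least-squares loss are illustrative assumptions, not part of any specific system discussed at the workshop.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """Run a few epochs of gradient descent on one device's local data
    (illustrative least-squares objective)."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def fedavg_round(w_global, devices):
    """One communication round: broadcast w_global, collect locally
    trained models, aggregate them weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in devices:
        updates.append(local_sgd(w_global, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])          # ground-truth model (toy example)
devices = []
for _ in range(10):                     # ten edge devices
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ w_true))     # noiseless local labels

w = np.zeros(2)
for _ in range(30):                     # thirty communication rounds
    w = fedavg_round(w, devices)        # w converges toward w_true
```

Note that each round costs one download and one upload of the full model per device, which is exactly the communication bottleneck the scope paragraph highlights for high-dimensional models.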
Thus, this workshop seeks to bring together researchers and experts from academia, industry, and governmental agencies to discuss and promote the research and development needed to overcome the major challenges that pertain to this cutting-edge research topic. Suitable topics for this workshop include, but are not limited to, the following areas:
- Secrecy of edge learning algorithms
- Over-the-air computation for edge learning
- Fundamental limits of edge learning systems
- Wireless network optimization for improving the performance of edge learning
- Data compression for edge learning
- Adaptive transmission for edge learning
- Techniques for wireless crowd labelling
- Modeling and performance analysis of edge learning networks
- Energy efficiency of implementing machine learning over wireless edge networks
- Ultra-low latency edge learning and inference
- Experiments and testbeds on edge learning
- Privacy and security issues in edge learning
- Edge learning for intelligent signal processing
- Edge learning for user behavior analysis and inference
- Distributed reinforcement learning for network decision making, network control, and management
Paper Submission Deadline: 15 July 2022
Notification of Acceptance: 15 September 2022
Camera-ready Papers: 7 October 2022
Workshop Date: TBD