IEEE Global Communications Conference
4–8 December 2022 // Rio de Janeiro, Brazil // Hybrid: In-Person and Virtual Conference
Accelerating the Digital Transformation through Smart Communications

Technical Tutorials

All of the tutorials will be available for access on-demand through the conference virtual platform for those attendees with a Tutorial registration. Some of the tutorials will take place in-person in Rio de Janeiro and some will be presented remotely but live. The list is below. Please note: This list is subject to change depending on how many people sign up for these tutorials in advance.

Sunday 4 December 2022 (Morning: 8:00 - 11:30)

IN-PERSON:

TU-03: Machine Learning and Security for Vehicular Networks
TU-07: Post-Shannon Communications – Breaking the Shannon Limit for 6G

VIRTUAL:

Sunday 4 December 2022 (Afternoon: 14:00 - 17:30)

IN-PERSON:

TU-09: Evolution of NOMA Toward Next Generation Multiple Access
TU-10: Terahertz Communications for 6G and Beyond: How Far Are We?
TU-11: Understanding O-RAN: A Tutorial on Architecture, Interfaces, Algorithms, Security, and Research

VIRTUAL:

TU-12: Semantic Communications: Transmission Beyond Shannon Paradigm
TU-13: Deep Learning for Physical Layer Security: Towards Context-aware Intelligent Security for 6G Systems
TU-14: Distributed Machine Learning for 6G Networks: A Tutorial
TU-15: Holographic Radio: A New Paradigm for Ultra-Massive MIMO
TU-16: IEEE 802.11be and Beyond: All You Need to Know about Next-generation Wi-Fi

Thursday 8 December 2022 (Morning: 8:00 - 11:30)

IN-PERSON:

TU-02: The Use of Intents as a Key Enabler for Autonomous Networks
TU-19: Wireless Information and Energy Transfer in the Era of 6G Communications

VIRTUAL:

Thursday 8 December 2022 (Afternoon: 14:00 - 17:30)

IN-PERSON:

TU-25: Ultra-Dense LEO Satellite-Based Communication Systems: A Tractable Modelling Technique
TU-26: Compute-Caching-Communication Integration for Efficient Delivery of Metaverse Experiences 
TU-27: Open RAN Security and Privacy: Opportunities and Challenges

VIRTUAL:

TU-28: Post-Deep Learning Era: Emerging Quantum Machine Learning for Sensing and Communications
TU-29: Meta Learning for Future Wireless Networks: Basics and Applications
TU-30: Realizing the Metaverse with Edge Intelligence: A Tutorial
TU-31: Localization-of-Things in Beyond 5G Ecosystem
TU-32: Wireless for Machine Learning 


LIST OF TUTORIALS

The complete list of tutorials is as follows. The pre-recorded presentations of these tutorials will be available for on-demand access through the conference virtual platform.

TU-01: On the Road From Classical to Quantum Communications
TU-02: The Use of Intents as a Key Enabler for Autonomous Networks
TU-03: Machine Learning and Security for Vehicular Networks
TU-04: Reconfigurable Intelligent Surfaces: Electromagnetic models, design, and future directions
TU-05: AI-Enhanced MIMO Technologies for Communications and Sensing
TU-06: Network Slicing for 6G: Techniques, Standards, and Applications
TU-07: Post-Shannon Communications – Breaking the Shannon Limit for 6G
TU-08: Internet of Bio-Nano Things: Getting Practical with Molecular Communications
TU-09: Evolution of NOMA Toward Next Generation Multiple Access
TU-10: Terahertz Communications for 6G and Beyond: How Far Are We?
TU-11: Understanding O-RAN: A Tutorial on Architecture, Interfaces, Algorithms, Security, and Research
TU-12: Semantic Communications: Transmission Beyond Shannon Paradigm
TU-13: Deep Learning for Physical Layer Security: Towards Context-aware Intelligent Security for 6G Systems
TU-14: Distributed Machine Learning for 6G Networks: A Tutorial
TU-15: Holographic Radio: A New Paradigm for Ultra-Massive MIMO
TU-16: IEEE 802.11be and Beyond: All You Need to Know about Next-generation Wi-Fi
TU-17: Towards a Wireless Metaverse:  A Confluence of Extended Reality (XR), Artificial Intelligence (AI) and Semantic Communications
TU-18: Deep Learning for the Physical Layer: A Hands-on Experience
TU-19: Wireless Information and Energy Transfer in the Era of 6G Communications
TU-20: Interplay between Sensing and Communications: Fundamental Limits, Signal Processing, and Prototyping
TU-21: Wireless Blockchain Networks for Applications of Cyber-Physical Systems
TU-22: Scalable, accurate, and privacy-preserving localization in B5G Wireless Networks
TU-23: Edge Artificial Intelligence for 6G: Scalability, Trustworthiness, and Applications
TU-24: Wireless Channel Measurements, Characteristics Analysis, and Models Towards 6G
TU-25: Ultra-Dense LEO Satellite-based Communication Systems: A Tractable Modelling Technique
TU-26: Compute-Caching-Communication Integration for Efficient Delivery of Metaverse Experiences
TU-27: Open RAN Security and Privacy: Opportunities and Challenges
TU-28: Post-Deep Learning Era: Emerging Quantum Machine Learning for Sensing and Communications
TU-29: Meta Learning for Future Wireless Networks: Basics and Applications
TU-30: Realizing the Metaverse with Edge Intelligence: A Tutorial
TU-31: Localization-of-Things in Beyond 5G Ecosystem
TU-32: Wireless for Machine Learning


TU-01: On the Road From Classical to Quantum Communications

VIRTUAL 

Presenter:
Lajos Hanzo (University of Southampton, UK)

Biography:
Lajos Hanzo (http://www-mobile.ecs.soton.ac.uk) FREng, FIEEE, FIET, EURASIP Fellow, DSc holds the Chair of Telecommunications at Southampton University, UK. He has co-authored 19 IEEE Press - John Wiley books and 2000+ research contributions on IEEE Xplore, organized and chaired major IEEE conferences, and has been awarded a number of distinctions. His research is funded by the European Research Council's Advanced Fellow Grant.

Abstract:
The marriage of ever-more sophisticated signal processing and wireless communications has led to compelling 'tele-presence' solutions - at the touch of a dialling key. However, the 'quantum' leaps both in digital signal processing theory and in its nano-scale implementation are set to depart from classical physics, which obeys the well-understood laws revealed by science. We embark on a journey into the weird and wonderful world of quantum physics, where the traveller has to obey the sometimes strange new rules of the quantum world. Hence we ask the judicious question: can the marriage of applied signal processing and communications extend beyond the classical world into the quantum world?


TU-02: The Use of Intents as a Key Enabler for Autonomous Networks

THURSDAY, DEC 8 8:00 - 11:30  /  LOCATION: Capri I

Presenters:

Jörg Niemöller (Ericsson, Sweden); Leonid Mokrushin (Ericsson, Sweden); Pedro Henrique Gomes (Ericsson, Brazil)

Biography:
Jörg Niemöller is an analytics and customer experience expert in Solution Area OSS. He joined Ericsson in 1998 and spent several years at Ericsson Research, where he gained experience working with machine-reasoning technologies and developed an understanding of their business relevance for autonomous zero-touch operation. He is currently driving the introduction of these technologies into Ericsson’s portfolio of Operations Support Systems/Business Support Systems solutions. Jörg is also the author of the suite of guidebooks and models for intents published by TM Forum. Jörg Niemöller holds a Ph.D. in computer science from Tilburg University, the Netherlands, and a diploma degree in electrical engineering from the TU Dortmund University, Germany.

Leonid Mokrushin (Ericsson, Sweden) is a principal researcher at Ericsson Research. With a background in computer science and formal methods, he is currently focusing on knowledge-intensive symbolic AI systems and their practical applications in the telecom domain. He joined Ericsson in 2007 after postgraduate studies at Uppsala University in Sweden, where he specialized in the formal verification of real-time systems. He holds an M.S. in software engineering from Peter the Great St. Petersburg Polytechnic University, Russia.

Pedro Henrique Gomes (Ericsson, Brazil) is a senior researcher at Ericsson Research, engaged in management of 5G networks and services. He is a delegate in the ETSI Zero-Touch Network & Service Management working group, contributing to the architecture definition especially with closed-loop automation and intent-driven enablers. He received a Ph.D. (2019) and M.Sc. (2015) in electrical engineering from the University of Southern California, Los Angeles, USA, and M.Sc. (2011) in computer science from the University of Campinas, Brazil.

Abstract:
5G and beyond-5G network operations are becoming increasingly complex as services become more dynamic and heterogeneous. This complexity may hinder the CSPs’ ability to manage and assure the desired customer experience at a reasonable cost using current tooling. Therefore, rule-based operations are increasingly being replaced by model- and knowledge-driven approaches. The final goal is to realize fully autonomous networks that are capable of adapting and evolving as network conditions and service requirements change over time.
The vision of autonomous networks has been shaped by TM Forum’s Autonomous Networks project. It relies on the use of intents as a standardized way of capturing the requirements for network operations at all levels (i.e., business, services, resources, etc.). This is aligned with the more recent understanding of intents, especially in the telecom industry, where intents are “the formal specifications of all expectations including requirements, goals, and constraints given to a technical system” [2].


The TM Forum’s Autonomous Networks Project (ANP) has worked on the definition of an autonomous network architecture [1] and on a set of guidelines [2-7] that provide the details on the use of intents in autonomous networks. The guidelines encompass multiple aspects related to intent-driven management, including: (i) the principles for intent-driven operations, (ii) a formal definition of the management functions involved in intent life cycle management, (iii) an intent-based interface, (iv) a common information model for intent representation, and (v) a federation model approach that allows extensibility of the intent representation. All these concepts form a foundation for intent-based operations.
The objective of the tutorial is to disseminate the concept of intents and intent-driven management that has been standardized by TM Forum, and demonstrate how this concept can be leveraged by the communications community, including students, researchers and other professionals in telecom, cloud, AI/ML, etc.


TU-03: Machine Learning and Security for Vehicular Networks

SUNDAY, DEC 4 8:00 - 11:30  /  LOCATION: Capri III

Presenter:
Yi Qian (University of Nebraska-Lincoln, USA)

Biography:
Yi Qian received a Ph.D. degree in electrical engineering from Clemson University. He is currently a professor in the Department of Electrical and Computer Engineering, University of Nebraska-Lincoln (UNL). Prior to joining UNL, he worked in the telecommunications industry, academia, and government. His research interests include communication networks and systems, and information and communication network security. Prof. Yi Qian is a Fellow of IEEE. He was previously Chair of the IEEE Technical Committee for Communications and Information Security. He was the Technical Program Chair for the 2018 IEEE International Conference on Communications. He has served on the editorial boards of several international journals and magazines, including as the Editor-in-Chief for IEEE Wireless Communications between July 2018 and June 2022. He was a Distinguished Lecturer for the IEEE Vehicular Technology Society and a Distinguished Lecturer for the IEEE Communications Society. Prof. Yi Qian received the Henry Y. Kleinkauf Family Distinguished New Faculty Teaching Award in 2011, the Holling Family Distinguished Teaching Award in 2012, the Holling Family Distinguished Teaching/Advising/Mentoring Award in 2018, and the Holling Family Distinguished Teaching Award for Innovative Use of Instructional Technology in 2018, all from the University of Nebraska-Lincoln. He is the principal author of the textbook, “Security in Wireless Communication Networks”, published by IEEE Press/Wiley in 2021.

Abstract:
Vehicular networks have been considered a promising solution to achieve better traffic management and to improve the driving experience. However, vehicular networks are susceptible to various security attacks. Due to the wireless nature of vehicular communications, securing vehicular networks poses great challenges that have hampered the implementation of vehicular network services. Many solutions have been proposed by researchers and industry in recent years. In this tutorial, we first present an overview of security issues for vehicular networks, followed by a survey of the state-of-the-art solutions on security for vehicular networks. After that, we present two case studies on misbehavior detection in vehicular communication networks: one introducing machine learning and reputation-based misbehavior detection systems to enhance detection accuracy as well as to ensure the reliability of both vehicles and messages, and another introducing a deep reinforcement learning based dynamic reputation policy for misbehavior detection in vehicular networks. These misbehavior detection systems are trained using datasets generated through extensive simulations based on realistic vehicular network environments. We show that various machine learning schemes can be exploited to accurately identify several misbehaviors in vehicular networks.


TU-04: Reconfigurable Intelligent Surfaces: Electromagnetic models, design, and future directions

VIRTUAL

Presenters:
Alessio Zappone (University of Cassino and Southern Lazio, Italy); Marco Di Renzo (CNRS - CentraleSupelec - Univ. Paris-Sud, Paris, France)

Biography:
Alessio Zappone obtained his Ph.D. degree in electrical engineering in 2011 from the University of Cassino and Southern Lazio, Cassino, Italy. From 2012 to 2016, he was with TU Dresden, Germany, managing the project CEMRIN on energy-efficient resource allocation in wireless networks, funded by the German Research Foundation. From 2017 to 2019 he was with CentraleSupelec, Paris, France, as the recipient of the H2020 Individual Marie Curie fellowship for experienced researchers BESMART. He is now a tenured professor at the University of Cassino and Southern Lazio, Italy. He received the IEEE Marconi Prize paper award and the EURASIP best paper awards for his research on RIS-based networks. Alessio is an IEEE Senior Member, serves as a senior area editor for the IEEE SIGNAL PROCESSING LETTERS and as an Editor of the IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, and has been a guest editor for the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS. Alessio is a co-founder and chair of the special interest group “REFLECTIONS”, activated within the Signal Processing and Computing for Communications Technical Committee of the IEEE Communications Society, which focuses on the use of RIS for signal processing and communications. He is also a co-founder and vice-chair of the IEEE emerging technology initiative (ETI) on RIS, activated by the IEEE Communications Society.

Marco Di Renzo received the Ph.D. degree in electrical engineering from the University of L’Aquila, Italy, in 2007. Since 2010, he has been with the French National Center for Scientific Research, where he is a CNRS Research Director in the Laboratory of Signals and Systems of Paris-Saclay University - CNRS and CentraleSupelec. He served as Editor and Associate Editor-in-Chief of IEEE Communications Letters, as an Editor of IEEE Transactions on Communications and IEEE Transactions on Wireless Communications, and as a guest editor of IEEE Journal on Selected Areas in Communications. He is now the Editor-in-Chief of IEEE Communications Letters. He is a Highly Cited Researcher (Clarivate Analytics, Web of Science), a World’s Top 2% Scientist from Stanford University, and a Fellow of IEEE and IET. He has received the IEEE Communications Society Best Young Researcher Award for Europe, Middle East and Africa, the SEE-IEEE Alain Glavieux Award, the 2021 EURASIP Best Paper Award, and the Fulbright Fellowship to work on metamaterial-based wireless at CUNY Advanced Science Research Center, USA. Marco is the Founding Chair of the Special Interest Group “RISE” on Reconfigurable Intelligent Surfaces of the Wireless Technical Committee of the IEEE Communications Society, the Founding Lead Editor of the IEEE Communications Society Best Readings in Reconfigurable Intelligent Surfaces, and a Co-Founder and the Emerging Technology Committee Liaison Officer of the Emerging Technology Initiative on RIS. Marco Di Renzo is the Vice-Chair of the Industry Specification Group on Reconfigurable Intelligent Surfaces within ETSI.

Abstract:
Between 2020 and 2030, global IP data traffic is expected to rise by 55% annually, reaching 607 exabytes in 2025 and 5,016 exabytes in 2030. In addition, future wireless networks will have to support many innovative vertical services, each with its own specific requirements, e.g., end-to-end latency of 1 ms and reliability higher than 99.999% for URLLC, terminal densities of 1 million terminals per square kilometer for massive IoT applications, per-user data rates of the order of terabit/s for broadband applications, and terminal location accuracy of the order of 0.1 m for V2X communications. These requirements are beyond what 5G networks have been designed to handle.


A recent technological breakthrough that holds the potential to revolutionize the traditional approach to wireless network design and operation is that of reconfigurable intelligent surfaces (RISs). RIS-based communications put forth the idea of treating the communication environment not as an entity fixed by nature, but as a variable to be customized. RISs are nearly-passive structures with very limited power consumption, size, and deployment costs. RISs are planar structures made of special materials, known as meta-materials, on which elementary electromagnetic reflectors are placed and spaced at sub-wavelength distances. A RIS provides the possibility of adapting its electromagnetic response in real-time in response to changes in the network and/or traffic demands. RISs can be deployed on the walls of buildings or can coat environmental objects between the communicating devices, which turns the wireless channel into a new variable to be optimized. Moreover, thanks to their reduced size and cost, a RIS can be equipped with a number of electromagnetic reflectors that is significantly larger than the number of antennas of an active (massive) MIMO antenna array.
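For readers less familiar with how an RIS enters the link, a minimal sketch of the narrowband signal model commonly used in the RIS literature may help (this model is our illustrative assumption, not material from the tutorial itself, which focuses on more detailed electromagnetic models):

\[
y = \bigl(h_d + \mathbf{g}^{\mathsf H}\,\boldsymbol{\Theta}\,\mathbf{h}\bigr)\,x + n,
\qquad
\boldsymbol{\Theta} = \operatorname{diag}\bigl(\beta_1 e^{j\theta_1},\ldots,\beta_N e^{j\theta_N}\bigr),
\]

where \(h_d\) is the direct transmitter-receiver channel, \(\mathbf{h}, \mathbf{g} \in \mathbb{C}^{N}\) are the channels to and from the \(N\) reflecting elements, and the per-element amplitudes \(\beta_n \le 1\) and phase shifts \(\theta_n\) are the quantities the RIS reconfigures in real time to shape the end-to-end channel.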


TU-05: AI-Enhanced MIMO Technologies for Communications and Sensing

VIRTUAL

Presenters:
Feifei Gao (Tsinghua University Beijing, China); Shun Zhang (Xidian University, China); Zhen Gao (Beijing Institute of Technology, China)

Biography:
Feifei Gao (F’20) received the B.Eng. degree from Xi’an Jiaotong University, Xi’an, China, in 2002, the M.Sc. degree from McMaster University, Hamilton, ON, Canada, in 2004, and the Ph.D. degree from the National University of Singapore, Singapore, in 2007. Since 2011, he has been with the Department of Automation, Tsinghua University, Beijing, China, where he is currently an Associate Professor. His research interests include signal processing for communications, array signal processing, convex optimizations, and artificial intelligence assisted communications. He has authored/coauthored more than 150 refereed IEEE journal articles and more than 150 IEEE conference proceeding papers that are cited more than 8800 times in Google Scholar. He has served as a technical committee member for more than 50 IEEE conferences. He has also served as the Symposium Co-Chair of the 2019 IEEE International Conference on Communications (ICC), the 2018 IEEE Vehicular Technology Conference (VTC) Spring, and the 2015 IEEE International Conference on Communications (ICC), etc. He has served as an Editor for IEEE Transactions on Wireless Communications, IEEE Transactions on Cognitive Communications and Networking, IEEE Wireless Communications Letters, and China Communications, a Lead Guest Editor for IEEE Journal of Selected Topics in Signal Processing, and a Senior Editor for IEEE Signal Processing Letters and IEEE Communications Letters.

Shun Zhang (Senior Member, IEEE) received the B.S. degree in communication engineering from Shandong University, Jinan, China, in 2007, and the Ph.D. degree in communications and signal processing from Xidian University, Xi’an, China, in 2013. He is currently with the State Key Laboratory of Integrated Services Networks, Xidian University, where he is an Associate Professor. His research interests include massive MIMO, millimeter wave systems, RIS assisted communications, deep learning for communication systems, orthogonal time frequency space (OTFS) systems, and multiple access techniques. He is an Editor for Physical Communication. He has authored or coauthored more than 80 journal and conference papers, and is the inventor of 16 granted patents (including a PCT patent authorized by the US Patent and Trademark Office). He has received two Best Paper Awards at conferences, and two prize awards in natural sciences for research excellence from both the China Institute of Communications and the Chinese Institute of Electronics.

Zhen Gao (Member, IEEE) received the B.S. degree in information engineering from the Beijing Institute of Technology, Beijing, China, in 2011, and the Ph.D. degree in communication and signal processing from the Department of Electronic Engineering, Tsinghua University, China, in 2016. He is currently an Assistant Professor with the Beijing Institute of Technology. His research interests are in wireless communications, with a focus on multi-carrier modulations, multiple antenna systems, and sparse signal processing. He was a recipient of the IEEE Broadcast Technology Society 2016 Scott Helt Memorial Award (Best Paper), an Exemplary Reviewer of IEEE COMMUNICATION LETTERS in 2016, the IET Electronics Letters Premium Award (Best Paper) 2016, the UCET 2020 Best Paper Award, and the Young Elite Scientists Sponsorship Program (2018–2020) from the China Association for Science and Technology.

Abstract:
Wireless communication systems are turning to large antenna arrays, such as massive multi-input multi-output (MIMO) and reconfigurable intelligent surfaces (RIS), to exploit the degrees of freedom in the space domain. Furthermore, to save spectrum and hardware resources, integrated sensing and communication (ISAC) has opened up numerous game-changing opportunities by combining state-of-the-art communications and radar sensing. In this tutorial, we focus on the application of artificial intelligence (AI)/deep learning (DL) to MIMO-aided communications and sensing. Different from traditional model-driven approaches, AI/DL can help address existing communications and sensing problems from a data-driven perspective by extracting the inherent characteristics of real data. This tutorial aims to provide the audience with a general picture of the recent developments in this exciting area by introducing the merging of AI/DL and MIMO-aided ISAC systems over various topics, including channel acquisition, signal detection, environment sensing, beamforming design, etc. We will also discuss the challenges of AI/DL and present some interesting future directions.


This tutorial aims to provide a comprehensive overview of the state-of-the-art development in technology, regulation, and theory for “AI-Enhanced MIMO Technologies for Communications and Sensing," and to present a holistic view of research challenges and opportunities in the coming era of next-generation large-scale antenna systems. The content of this tutorial is intended for a diverse audience, including researchers working on RIS-assisted communications, massive MIMO, and ISAC, industry peers interested in B5G techniques, and graduate students working in the area of wireless communications. We hope that our tutorial will encourage our valued colleagues to join the community effort to promote the application of AI/DL techniques to large-scale array-aided wireless communications and sensing.


TU-06: Network Slicing for 6G: Techniques, Standards, and Applications

VIRTUAL

Presenters:
Jiajia Liu (Northwestern Polytechnical University, China); Jiadai Wang (Northwestern Polytechnical University, China); Nei Kato (Tohoku University, Japan)

Biography:
Jiajia Liu (SM’15) is a full professor (Vice Dean) at the School of Cybersecurity, Northwestern Polytechnical University, and was a Full Professor (2013-2018) at the School of Cyber Engineering, Xidian University. He has published more than 200 peer-reviewed papers in many high-quality venues, including prestigious IEEE journals and conferences. He received the 2020 IEEE ComSoc Best YP Award in Academia, the 2019 IEEE VTS Early Career Award, the IEEE ComSoc Asia-Pacific Outstanding Young Researcher Award in 2017, and Best Paper Awards from many international conferences including IEEE flagship events, such as IEEE GLOBECOM in 2019 and 2016, IEEE WiMob in 2019, IEEE WCNC in 2012 and 2014, and IEEE IC-NIDC in 2018. His research interests cover a wide range of areas including wireless and mobile ad hoc networks, space-air-ground integrated networks, intelligent and connected vehicles, mobile/edge/cloud computing and storage, and Internet of Things security. He is the Chair of the IEEE IOT-AHSN TC, and is a Distinguished Lecturer of the IEEE Communications Society and Vehicular Technology Society.

Jiadai Wang (S’17, M’22) is currently an associate professor with the School of Cybersecurity, Northwestern Polytechnical University. Her research interests cover network slicing, software-defined networking, IoT, connected vehicles, and 5G/6G communications.

Nei Kato (F’13) is a full professor and the Dean of the Graduate School of Information Sciences (GSIS) and was the Director (2015-2019) of the Research Organization of Electrical Communication (ROEC) and the Strategic Adviser (2013) to the President, Tohoku University. He has been engaged in research on computer networking, wireless mobile communications, satellite communications, ad hoc & sensor & mesh networks, UAV networks, smart grid, AI, IoT, Big Data, and pattern recognition. He has published more than 400 papers in prestigious peer-reviewed journals and conferences. He was the Vice-President (Member & Global Activities) of the IEEE Communications Society (2018-2021) and the Editor-in-Chief of IEEE Transactions on Vehicular Technology (2017-2021), and is the Chair of the IEEE Communications Society Sendai Chapter. He served as the Editor-in-Chief of IEEE Network Magazine (2015-2017), a Member-at-Large on the Board of Governors, IEEE Communications Society (2014-2016), a Vice Chair of the Fellow Committee of the IEEE Computer Society (2016), and a member of the IEEE Communications Society Award Committee (2015-2017). He has also served as the Chair of the Satellite and Space Communications Technical Committee (2010-2012) and the Ad Hoc & Sensor Networks Technical Committee (2014-2015) of the IEEE Communications Society. He is a Distinguished Lecturer of the IEEE Communications Society and Vehicular Technology Society. He is a Fellow of The Engineering Academy of Japan, a Fellow of IEEE, and a Fellow of IEICE.

Abstract:
In the coming 6G era characterized by connected intelligence, global network coverage, as well as new verticals, more application scenarios with specific capabilities will emerge, and there is an urgent need to provide tailored end-to-end service provision. Network slicing is widely recognized as the key enabler for 6G, which can change the network form from “one-size-fits-all” to “one-size-per-service” by dividing the physical network into multiple logical networks as required, to meet differentiated performance metrics and provide customized services. Although the network slicing concept has been explored to a certain extent in 5G, it will be fully expanded and improved in 6G combined with ubiquitous intelligence and various innovative application scenarios. We provide in this tutorial a comprehensive review of recent research works concerning network slicing for 6G from three aspects: techniques, standards, and applications. Slicing techniques lay the foundation for realizing end-to-end logical isolation and service provision, which involves multiple technical domains such as the radio access network (RAN), transport network (TN), core network (CN), and slicing management system. Since ubiquitous intelligence is one of 6G’s most striking features, we especially emphasize artificial intelligence (AI)-assisted slicing techniques. Cross-domain and cross-vendor standards can provide technical guidance for network slicing, which is also the prelude to its widespread commercialization. The analysis of typical network slicing applications can help understand the key requirements and corresponding solutions for deploying slices, providing valuable references for verticals. Finally, we highlight the challenges faced by network slicing and envision its future evolution.


TU-07: Post-Shannon Communications – Breaking the Shannon Limit for 6G

SUNDAY, DEC 4 8:00 - 11:30  /  LOCATION: Capri IV

Presenters:
Rafael F. Schaefer (Technische Universität Dresden, Germany); Holger Boche (Technische Universität München, Germany); Christian Deppe (Technische Universität München, Germany); Frank H. P. Fitzek (Technische Universität Dresden, Germany)

Biography:
Rafael F. Schaefer is a Professor and head of the Chair of Information Theory and Machine Learning at Technische Universität Dresden, Germany. He received the Dipl.-Ing. degree in electrical engineering and computer science from the Technische Universität Berlin, Germany, in 2007, and the Dr.-Ing. degree in electrical engineering from the Technische Universität München, Germany, in 2012. From 2013 to 2015, he was a Post-Doctoral Research Fellow with Princeton University. From 2015 to 2020, he was an Assistant Professor with the Technische Universität Berlin, Germany. From 2021 to 2022, he was a Professor with the Universität Siegen, Germany. Among his publications is the recent book Information Theoretic Security and Privacy of Information Systems (Cambridge University Press, 2017). He was a recipient of the VDE Johann-Philipp-Reis Prize in 2013. He received the best paper award of the German Information Technology Society (ITG-Preis) in 2016. He is currently an Associate Editor of the IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY and of the IEEE TRANSACTIONS ON COMMUNICATIONS. He is a Member of the IEEE Information Forensics and Security Technical Committee.

Holger Boche received the Dipl.-Ing. degree in electrical engineering, Graduate degree in mathematics, and the Dr.-Ing. degree in electrical engineering from the Technische Universität Dresden, Germany, in 1990, 1992, and 1994. In 1998, he received the Dr. rer. nat. degree in pure mathematics from the Technische Universität Berlin, Germany. From 2002 to 2010, he was Full Professor in mobile communication networks with the Institute for Communications Systems, Technische Universität Berlin, Germany. In 2004, he became the Director of the Fraunhofer Institute for Telecommunications (HHI). He is currently Full Professor at the Institute of Theoretical Information Technology, Technische Universität München, Germany, which he joined in October 2010. Since 2014, Prof. Boche has been a member and Honorary Fellow of the TUM Institute for Advanced Study, Munich, Germany, and since 2018, a Founding Director of the Center for Quantum Engineering, Technische Universität München, Germany. Since 2021, he has been leading jointly with Frank Fitzek the BMBF Research Hub 6G-life. He was elected member of the German Academy of Sciences (Leopoldina) in 2008 and to the Berlin Brandenburg Academy of Sciences and Humanities in 2009. He is a recipient of the Research Award "Technische Kommunikation" from the Alcatel SEL Foundation in October 2003, the "Innovation Award" from the Vodafone Foundation in June 2006, and the Gottfried Wilhelm Leibniz Prize from the Deutsche Forschungsgemeinschaft (German Research Foundation) in 2008. He was a co-recipient of the 2006 IEEE Signal Processing Society Best Paper Award and a recipient of the 2007 IEEE Signal Processing Society Best Paper Award.

Christian Deppe received the Dipl.-Math. degree in mathematics from the Universität Bielefeld, Germany, in 1996, and the Dr.-Math. degree in mathematics from the Universität Bielefeld, Germany, in 1998. He was a Research and Teaching Assistant with the Fakultät für Mathematik, Universität Bielefeld, from 1998 to 2010. From 2011 to 2013 he was project leader of the project "Sicherheit und Robustheit des Quanten-Repeaters" of the Federal Ministry of Education and Research at the Fakultät für Mathematik, Universität Bielefeld. In 2014 he was supported by a DFG project at the Institute of Theoretical Information Technology, Technische Universität München. In 2015 he held a temporary professorship at the Fakultät für Mathematik und Informatik, Friedrich-Schiller-Universität Jena. Since 2018 he has been with the Department of Communications Engineering at the Technische Universität München, Germany. He is project leader of several projects funded by the BMBF, the DFG, the state of Bavaria, and industry partners. He is involved in the BMBF Research Hub 6G-life. His current research interests are in the areas of Post-Shannon theory, quantum communication networks, and error-correcting codes with feedback.

Frank H. P. Fitzek is a Professor and chair of the communication networks group at Technische Universität Dresden, Germany, coordinating the 5G Lab Germany. Since 2021, he has been leading jointly with Holger Boche the BMBF Research Hub 6G-life. He received his diploma (Dipl.-Ing.) degree in electrical engineering from the University of Technology - Rheinisch-Westfälische Technische Hochschule (RWTH) - Aachen, Germany, in 1997 and his Ph.D. (Dr.-Ing.) in Electrical Engineering from the Technical University Berlin, Germany in 2002, and became Adjunct Professor at the University of Ferrara, Italy in the same year. In 2003 he joined Aalborg University as Associate Professor and later became Professor. He co-founded several start-up companies starting with acticom GmbH in Berlin in 1999. He has visited various research institutes including the Massachusetts Institute of Technology (MIT), VTT, and Arizona State University. In 2005 he won the YRP award for his work on MIMO MDC and received the Young Elite Researcher Award of Denmark. He was selected to receive the NOKIA Champion Award several times in a row from 2007 to 2011. In 2008 he was awarded the Nokia Achievement Award for his work on cooperative networks. In 2011 he received the SAPERE AUDE research grant from the Danish government and in 2012 he received the Vodafone Innovation Prize. His current research interests are in the areas of wireless and mobile 5G communication networks, mobile phone programming, network coding, cross-layer and energy-efficient protocol design, and cooperative networking.

Abstract:
Since the breakthrough of Shannon's seminal paper, researchers have worked on codes and techniques that approach the fundamental limits of message transmission. Here, the maximum number of possible messages that can be transmitted scales exponentially with the blocklength of the codewords. We advocate a paradigm change towards Post-Shannon communication that allows the encoding of messages whose maximum number scales double-exponentially with the blocklength! In addition, secrecy comes "for free" in the sense that it can be incorporated without penalizing the transmission rate! This paradigm shift is the study of semantic communication instead of message-only transmission. It involves a shift from the traditional design of message transmission to a new Post-Shannon design that takes the semantics of the communication into account, going beyond the transmission of pure message bits. Entire careers were built designing methods and codes on top of previous works, bringing only marginal gains in approaching the fundamental limit of Shannon's message transmission. This paradigm change can bring not merely marginal but exponential gains in the efficiency of communication. Within the Post-Shannon framework, this tutorial explores identification codes, embedded security, resilience by design, and the exploitation of resources that have been considered useless in the traditional Shannon framework.
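As a rough numerical illustration of the double-exponential claim (our own summary of the classical Ahlswede-Dueck identification result, included here for orientation rather than taken from the tutorial material): over a channel of Shannon capacity \(C\) and blocklength \(n\), reliable message transmission distinguishes on the order of

\[
M_{\mathrm{transmission}}(n) \approx 2^{nC}
\]

messages, whereas identification codes, where the receiver only has to decide whether a particular message of interest was sent or not, can handle on the order of

\[
M_{\mathrm{identification}}(n) \approx 2^{2^{nC}}
\]

messages, i.e., the exponent itself grows exponentially with the blocklength.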


TU-08: Internet of Bio-Nano Things: Getting Practical with Molecular Communications

VIRTUAL

Presenters:
Murat Kuscu (Koç University, Turkey), Ozgur B. Akan (Koç University, Turkey)

Biography:
Murat Kuscu is an Assistant Professor and a Marie Skłodowska-Curie Fellow at the Department of Electrical and Electronics Engineering, Koç University, Turkey, where he is also acting as the Director of the Nano/Bio/Physical Information and Communications Laboratory (CALICO Lab), and the Assistant Director of the Nanofabrication and Nanocharacterization Center (n2STAR). He received his PhD degrees in engineering from the University of Cambridge, UK, in 2020, and in electrical and electronics engineering from Koç University, Turkey, in 2017. His current research interests include molecular communications, nanoscale biosensors, microfluidics, Internet of Bio-Nano Things, and Internet of Everything. He has co-authored more than 35 research articles in these emerging research fields. He received the University of Cambridge CAPE Acorn Post-graduate Research Award 2019, the IEEE Turkey Ph.D. Thesis Award 2018, and the Koç University Academic Excellence Award 2018, and was selected as an IEEE Transactions on Communications Exemplary Reviewer 2020.

Ozgur B. Akan is the Head of the Internet of Everything (IoE) Group at the Department of Engineering, University of Cambridge, and the Director of the Next-generation and Wireless Communications Laboratory (NWCL) at Koç University. He conducts highly advanced theoretical and experimental research on nanoscale, molecular, and neural communications, Internet of Everything, and next-generation wireless communications, and has authored more than 250 articles (with 13500+ citations, h-index of 58). He is an IEEE Fellow and a Turing Fellow. He has been awarded the TUBİTAK Science Award 2020, the AXA Chair in Internet of Everything for 2020-2025, ERC Consolidator Grant for 2014-2019, and ERC Proof-of-Concept Grant for 2018-2019. He also received the ACM NanoCom Outstanding Milestone Award 2019, IEEE NanoTechnology Council Distinguished Lecturership 2017, TÜBİTAK Young Scientist Award 2014, IEEE Communications Society Distinguished Lecturership 2011, IBM Shared University Research (SUR) Award 2011, IEEE Communications Society 2010 Outstanding Young Researcher Award, IBM Faculty Award 2008 & 2010. He is acting as an Editor for Nano Communication Networks Journal (Elsevier). He also acted as a Series Editor for IEEE Communications Magazine, Inaugural Associate Editor for IEEE Networking Letters, Associate Editor for IEEE Transactions on Vehicular Technology (2007-2017), IEEE Transactions on Communications (2013-2017), IET Communications, as the General Chair of IEEE INFOCOM 2017 and ACM MOBICOM 2012, and as the Steering Committee Member for IEEE INFOCOM and ACM NanoCom.

Abstract:
Internet of Everything (IoE) brings a holistic view of the universe, regarding it as a multiscale interconnected network of heterogeneous entities ranging from planets down to animals, plants, cells, and molecules. At the center of the IoE lies an emerging ICT framework, the Internet of Bio-Nano Things (IoBNT), which promises universal connectivity by means of networks of micro/nanoscale artificial and natural/biological devices, i.e., bio-nano things. IoBNT necessitates unconventional communication techniques that can meet the stringent requirements and limitations of bio-nano things and their operating environments. The most promising technique to enable IoBNT is bio-inspired Molecular Communications (MC), which uses molecules for information transfer. Starting with a discussion on potential IoBNT applications within the broader IoE framework, this tutorial will first provide a critical evaluation of the theoretical advancements in MC research over the last 15 years along the modeling, modulation, and detection aspects. The tutorial will proceed with an overview of the recent experimental studies and findings reported at different scales, highlighting the role of micro-/nano-technologies and synthetic biology tools in building practical MC transceivers and testbeds. In this part, we will detail our recent experimental efforts using graphene and related nanomaterials to develop MC system prototypes. This overview will lead to an extensive discussion on physically relevant challenges revealed by MC experiments, such as those regarding interference and noise in physiological environments, and modeling and optimization in the face of nonlinearities. Particular focus will be placed on the opportunities enabled by the engineering of ligand-receptor interactions to tackle the adaptivity, multi-access, co-existence, and limited data-rate challenges of molecular communication networks. The tutorial will conclude with a discussion on potential interdisciplinary approaches to overcome the existing challenges.


TU-09: Evolution of NOMA Toward Next Generation Multiple Access

SUNDAY, DEC 4 14:00 - 17:30  /  LOCATION: Capri I

Presenters:
Zhiguo Ding (University of Manchester, UK); Yuanwei Liu (Queen Mary University of London, UK)

Biography:
Zhiguo Ding received his Ph.D. degree in Electrical Engineering from Imperial College London in 2005. From Jul. 2005 to Apr. 2018, he worked at Queen's University Belfast, Imperial College, Newcastle University, and Lancaster University. Since Apr. 2018, he has been with the University of Manchester as a Professor in Communications. From Sept. 2012 to Sept. 2020, he was also an academic visitor at Princeton University. Dr Ding's research interests are machine learning, B5G networks, cooperative and energy harvesting networks, and statistical signal processing. He is serving as an Area Editor for the IEEE OJ-COMS, an Editor for IEEE TVT and OJ-SP, and was an Editor for IEEE TCOM, IEEE WCL, IEEE CL and WCMC. He was the TPC Co-Chair for the 6th IET ICWMMN2015, Symposium Chair for the International Conference on Computing, Networking and Communications (ICNC 2016) and the 25th Wireless and Optical Communication Conference (WOCC), and Co-Chair of the WCNC-2013 Workshop on New Advances for Physical Layer Network Coding. He received the best paper award at the IET Comm. Conf. on Wireless, Mobile and Computing, 2009 and the 2015 International Conference on Wireless Communications and Signal Processing (WCSP 2015), the IEEE Communication Letter Exemplary Reviewer 2012, the EU Marie Curie Fellowship 2012-2014, IEEE TVT Top Editor 2017, the 2018 IEEE Communication Society Heinrich Hertz Award, the 2018 IEEE Vehicular Technology Society Jack Neubauer Memorial Award, and the 2018 IEEE Signal Processing Society Best Signal Processing Letter Award. He is a Web of Science Highly Cited Researcher and a Fellow of the IEEE.

Yuanwei Liu received the PhD degree in electrical engineering from the Queen Mary University of London, U.K., in 2016. He was with the Department of Informatics, King’s College London, from 2016 to 2017, where he was a Post-Doctoral Research Fellow. He has been a Senior Lecturer (Associate Professor) with the School of Electronic Engineering and Computer Science, Queen Mary University of London, since Aug. 2021, where he was a Lecturer (Assistant Professor) from 2017 to 2021. Yuanwei Liu is a Senior Editor of IEEE COMMUNICATIONS LETTERS, an Editor of the IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS and the IEEE TRANSACTIONS ON COMMUNICATIONS. He serves as the leading Guest Editor for IEEE JSAC special issue on Next Generation Multiple Access (NGMA), a Guest Editor for IEEE JSTSP special issue on Signal Processing Advances for NOMA in Next Generation Wireless Networks. He is a Web of Science Highly Cited Researcher. He received IEEE ComSoc Outstanding Young Researcher Award for EMEA in 2020. He received the 2020 Early Achievement Award of the IEEE ComSoc SPCC and CTTC. He has served as the Publicity Co-Chair for VTC 2019-Fall. He is the leading contributor for “Best Readings for NOMA” and the primary contributor for “Best Readings for RIS”. He serves as the chair of SIG in SPCC Technical Committee on the topic of signal processing Techniques for NGMA, the vice-chair of SIG WTC on the topic of RISE, and the Tutorials and Invited Presentations Officer for Reconfigurable Intelligent Surfaces Emerging Technology Initiative.

Abstract:
User data traffic, especially a large amount of video traffic and small-size Internet-of-Things packets, has dramatically increased in recent years with the emergence of smart devices, smart sensors, and various new applications such as virtual reality and autonomous driving. It is hence crucial to increase network capacity and user access to accommodate these bandwidth-consuming applications and enhance massive connectivity. As a prominent member of the next generation multiple access (NGMA) family, non-orthogonal multiple access (NOMA) has been recognized as a promising multiple access candidate for sixth-generation (6G) networks. The main content of this tutorial is to discuss the so-called “One Basic Principle plus Four New” concept. Starting with the basic NOMA principle to explore possible multiple access techniques in a non-orthogonal manner, the advantages and drawbacks of both channel state information based successive interference cancellation (SIC) and quality-of-service based SIC are discussed. Then, the application of NOMA to meet the new 6G performance requirements, especially for massive connectivity, is explored. Furthermore, the integration of NOMA with new physical layer techniques is considered, such as Orthogonal Time Frequency Space, Terahertz, Integrated Sensing and Communications, Visible Light Communication, etc. After that, new application scenarios for NOMA towards 6G are introduced, such as Integrated Terrestrial and Aerial Networks, Reconfigurable Intelligent Surfaces aided Wireless Communications, Robotic Communications, Multi-Layer Video Transmission, E-Health, etc. Finally, the application of machine learning (such as Reinforcement Learning, Deep Learning, Federated Learning, etc.) in NOMA networks is investigated, ushering in the machine learning empowered NGMA era and making multiple access intelligent for the next generation of networks.
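As a concrete illustration of the basic power-domain principle mentioned above, consider the textbook two-user downlink example (a minimal sketch under standard assumptions, not taken from the tutorial material): with total power \(P\), power-allocation coefficients \(\alpha_1 > \alpha_2\), \(\alpha_1 + \alpha_2 = 1\), noise power \(\sigma^2\), and channel gains \(|h_1|^2 < |h_2|^2\), the weak user 1 decodes its own signal treating user 2's signal as noise, while the strong user 2 first removes user 1's signal via SIC, yielding achievable rates

\[
R_1 = \log_2\!\left(1 + \frac{\alpha_1 P\,|h_1|^2}{\alpha_2 P\,|h_1|^2 + \sigma^2}\right),
\qquad
R_2 = \log_2\!\left(1 + \frac{\alpha_2 P\,|h_2|^2}{\sigma^2}\right).
\]

Allocating more power to the weaker user is what allows both users to share the same time-frequency resource, which is the starting point the tutorial then extends toward NGMA.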


TU-10: Terahertz Communications for 6G and Beyond: How Far Are We?

SUNDAY, DEC 4 14:00 - 17:30  /  LOCATION: Capri II

Presenters:
Josep Miquel Jornet (Northeastern University, United States); Chong Han (Shanghai Jiao Tong University, China); Nan Yang (Australian National University, Australia); Vitaly Petrov (Northeastern University, United States)

Biography:
Prof. Josep M. Jornet works in the Department of Electrical and Computer Engineering at Northeastern University, Boston, MA. He received the Ph.D. degree in Electrical and Computer Engineering from the Georgia Institute of Technology, Atlanta, GA, in 2013. His research interests are in THz-band communication networks, wireless nano-bio-communication networks, and the Internet of Nano-Things. In these areas, he has co-authored more than 180 peer-reviewed scientific publications, 1 book, and has also been granted 5 US patents. These works have been cited over 12,700 times (h-index of 51). He is serving as the lead principal investigator on multiple grants from U.S. federal agencies including the National Science Foundation, the Air Force Office of Scientific Research and the Air Force Research Laboratory. He is a recipient of the National Science Foundation CAREER award and of several other awards from IEEE, ACM, UB and NU. He is a Senior Member of the IEEE, a Member of the ACM, and an IEEE ComSoc Distinguished Lecturer (class of 2022-2023). He is serving as a Vice Chair of IEEE ComSoc RCC SIG on THz Communications, and as an Editor for IEEE Transactions on Communications.

Prof. Chong Han is with Shanghai Jiao Tong University, Shanghai, China, where he is currently an Associate Professor and the Director of the Terahertz Wireless Communications (TWC) Laboratory. He obtained the Master of Science and the Ph.D. degrees in Electrical and Computer Engineering from Georgia Institute of Technology, Atlanta, GA, USA, in 2012 and 2016, respectively. He received 2019–2021 Distinguished TPC Member Award, IEEE International Conference on Computer Communications (INFOCOM) and 2018 Elsevier Nano Communication Network Journal Young Investigator Award, 2018 Shanghai Chenguang Funding Award, and 2017 Shanghai Yangfan Funding Award. He is an editor of Nano Communication Networks (Elsevier) Journal and IEEE Access. He has published 4 book chapters, 45 journal articles, and 54 conference papers, most of which, if not all, are related to THz communications. He is a TPC Co-Chair or General Co-Chair for the 1st–5th International Workshop on Terahertz Communications, in conjunction with IEEE ICC 2019, Globecom 2019, ICC 2020, and ICC 2021. Furthermore, he is serving as a Vice Chair of IEEE ComSoc RCC Special Interest Group (SIG) on THz Communications.

Prof. Nan Yang works in the School of Engineering at the Australian National University, Canberra, Australia. He was awarded the Ph.D. degree in Electronic Engineering from Beijing Institute of Technology, China, in 2011. He received the IEEE ComSoc Asia-Pacific Outstanding Young Researcher Award in 2014, the Top Editor Award from the Transactions on Emerging Telecommunications Technologies in 2017, and eight Exemplary/Top Reviewer Awards from IEEE transactions and letters from 2012 to 2019. Also, he is the co-recipient of Best Paper Awards at IEEE Globecom 2016 and IEEE VTC-Spring 2013. He is currently serving on the Editorial Board of IEEE Transactions on Molecular, Biological, and Multi-Scale Communications, IEEE Communications Letters, IEEE Transactions on Wireless Communications, and IEEE Transactions on Vehicular Technology. He was the Guest Editor of ten special issues in international leading journals, the symposium/track chair at flagship conferences such as IEEE ICC and IEEE Globecom, and the TPC co-chair of eight workshops. He is a Senior Member of the IEEE. Since 2020, he has published 1 book chapter and more than 10 journal and conference papers on THz communications.

Dr. Vitaly Petrov works in the Department of Electrical and Computer Engineering at Northeastern University, Boston, MA. Before joining NU in 2022, Vitaly was a Senior Standardization Specialist with Nokia Bell Labs and later Nokia Standards, working on supporting extended reality (XR) devices in next-generation cellular networks, as well as on extending New Radio operation toward the 60 GHz mmWave band. Vitaly obtained his Ph.D. degree from Tampere University, Finland, in 2020. Vitaly was a visiting researcher with the University of Texas at Austin, the Georgia Institute of Technology, and King’s College London. He is the recipient of several IEEE conference and regional awards. Vitaly’s research interests are in enabling (sub-)terahertz communications for 6G and beyond. He has published 4 book chapters, 35 journal articles, and 30 conference papers (h-index of 29), most of which are related to mmWave and THz communications. Vitaly is a member of the IEEE ComSoc RCC Special Interest Group on THz and a TPC co-chair of the Workshop on Terahertz Communications in conjunction with IEEE Globecom 2022.

Abstract:
Wireless communications in the sub-terahertz and terahertz (THz) bands (or broadly speaking, from 100 GHz up to 10 THz) have been envisioned by both academia and industry as a key enabler of future sixth generation (6G) wireless networks. In the last five years, there has been major progress towards closing the so-called THz gap. Accordingly, the goal of this tutorial is to provide an updated look at the field of THz communications, (i) explaining how some of the many envisioned problems have already been solved and (ii) highlighting the key critical challenges that remain open or have emerged due to unforeseen phenomena. In this tutorial, after a high-level overview of the expected role of the THz band in 6G communications and sensing systems, the state of the art in THz device technologies will be reviewed, identifying the critical performance metrics that need to be considered when designing meaningful communication and sensing solutions. Similarly, the lessons learnt through both physics-based and data-driven channel modelling efforts will be summarized and utilized to drive the design of tailored communication and networking solutions. Then, a comprehensive survey of recent highly innovative solutions and open challenges will be provided, including those related to ultrabroadband physical layer solutions (e.g., waveform design, hybrid beamforming), ultra-directional networking strategies (e.g., interference and coverage analysis, beam discovery and tracking, resource allocation and multiple access, multi-hop relaying), and integration of THz communications with other 6G enablers (e.g., intelligent reflecting surfaces, non-terrestrial networks, machine learning). Emphasis will be given to understand the mathematical tools, simulation platforms, experimental testbeds, and data-sets available to the community. Overall, the proposed tutorial will be beneficial for a wide audience with diverse backgrounds ranging from device, circuits and antenna designers, to channel and physical layer experts, to networking engineers, both in academia and industry.


TU-11: Understanding O-RAN: A Tutorial on Architecture, Interfaces, Algorithms, Security, and Research

SUNDAY, DEC 4 14:00 - 17:30  /  LOCATION: Capri III

Presenters:
Michele Polese (Northeastern University, USA); Leonardo Bonati (Northeastern University, USA); Salvatore D'Oro (Northeastern University, USA); Stefano Basagni (Northeastern University, USA); Tommaso Melodia (Northeastern University, USA)

Biography:
Michele Polese has been a Principal Research Scientist at Northeastern University, Boston, since March 2020. He received his Ph.D. at the University of Padova in 2020. His research interests are in the analysis and development of future generations of cellular networks, spectrum sharing and passive/active user coexistence, Open RAN, and the performance evaluation of end-to-end, complex networks. He has contributed to O-RAN technical specifications and submitted responses to multiple FCC and NTIA notices of inquiry and requests for comments. He collaborates with several academic and industrial research partners and has received several best paper awards. He is serving as TPC co-chair for WNS3 2021-2022 and as an Associate Technical Editor for the IEEE Communications Magazine, and organized the Open 5G Forum in Fall 2021.

Leonardo Bonati received his B.S. in Information Engineering and his M.S. in Telecommunication Engineering from University of Padova, Italy in 2014 and 2016, respectively. He is currently pursuing a Ph.D. degree in Computer Engineering at Northeastern University, MA, USA. His research interests focus on 5G and beyond cellular networks, network slicing, and software-defined networking for wireless networks.

Salvatore D'Oro is a Research Assistant Professor at Northeastern University. He received his Ph.D. degree from the University of Catania in 2015. Salvatore is an area editor of Elsevier Computer Communications journal and serves on the Technical Program Committee (TPC) of multiple conferences and workshops such as IEEE INFOCOM, IEEE CCNC, IEEE ICC and IFIP Networking. Dr. D'Oro's research interests include optimization, artificial intelligence, security, network slicing and their applications to 5G networks and beyond. 

Stefano Basagni is a professor in the ECE Department at Northeastern University, in Boston, MA. He holds a Ph.D. in electrical engineering from the University of Texas at Dallas (2001) and a Ph.D. in computer science from the University of Milano, Italy (1998). Dr. Basagni's current interests concern research and implementation aspects of mobile networks and wireless communications systems, wireless sensor networking for IoT (underwater, aerial, and terrestrial), and the definition and performance evaluation of network protocols. Dr. Basagni has published over ten dozen highly cited, refereed technical papers and book chapters. Dr. Basagni served as a guest editor of multiple international ACM/IEEE, Wiley, and Elsevier journals. He has been the TPC co-chair of international conferences.

Tommaso Melodia is the William Lincoln Smith Chair Professor with the Department of Electrical and Computer Engineering at Northeastern University in Boston. He is also the Founding Director of the Institute for the Wireless Internet of Things and the Director of Research for the Platforms for Advanced Wireless Research (PAWR) Project Office. He received his Ph.D. in Electrical and Computer Engineering from the Georgia Institute of Technology in 2007. Prof. Melodia has served as Associate Editor of IEEE Transactions on Wireless Communications, IEEE Transactions on Mobile Computing, and Elsevier Computer Networks, among others. He has served as Technical Program Committee Chair for IEEE Infocom 2018 and as General Chair for IEEE SECON 2019, ACM Nanocom 2019, and ACM WUWnet 2014. Prof. Melodia's research on modeling, optimization, and experimental evaluation of Internet of Things and wireless networked systems has been funded by the U.S. NSF, AFRL, ONR, DARPA, and ARL.

Abstract:
The Open Radio Access Network (RAN) paradigm and its embodiment in the O-RAN Alliance specifications are poised to transform the telecom ecosystem. Created in 2018, the O-RAN Alliance already counts more than 300 members and contributors. It has drafted a complete set of specifications for the O-RAN architecture, interfaces, and platforms, calling for a virtualized and disaggregated RAN whose elements are connected via open interfaces and whose operations are optimized by intelligent controllers.

Understanding O-RAN, its architecture, interfaces, and tools becomes of paramount importance for researchers and practitioners in the wireless community. With this tutorial we intend to provide a comprehensive introduction to O-RAN, including a clear overview of the potential of the Open RAN paradigm and of the challenges to its realization. We will also present the tools to be used for experimental research in the O-RAN domain.

The tutorial is organized to provide a deep dive into the O-RAN specifications, describing the architecture, design principles, and O-RAN interfaces. We will then describe how the O-RAN RAN Intelligent Controllers (RICs) can be used to effectively control and manage 3GPP-defined RANs, with closed-loop control at near-real-time and non-real-time scales. Based on this, we will discuss innovations and challenges that relate to O-RAN networks, including the Artificial Intelligence (AI) and Machine Learning (ML) workflows enabled by the architecture and interfaces, along with security and standardization issues. Finally, we will present experimental platforms for research on O-RAN, and demo OpenRAN Gym, an open toolbox for experimental research on Open RAN. We will discuss recent research results, concluding with an outline of directions for future O-RAN development.
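To make the closed-loop control idea concrete, the short Python sketch below mimics the logic of a near-real-time control loop: it periodically reads RAN key performance metrics and adjusts the resources assigned to a slice. This is only an illustrative toy; the function names and KPM fields (fetch_kpm, send_control, slice_prb_util, etc.) are hypothetical placeholders and are not part of the O-RAN, E2, or OpenRAN Gym APIs.

import random
import time

# Toy xApp-style control loop. All names below are illustrative placeholders,
# not actual O-RAN / E2 / OpenRAN Gym calls.

def fetch_kpm():
    """Stand-in for a KPM report from the RAN (here: random numbers)."""
    return {"slice_prb_util": random.uniform(0.0, 1.0),
            "slice_buffer_bytes": random.randint(0, 50_000)}

def decide_prb_allocation(kpm, current_prbs, min_prbs=10, max_prbs=100):
    """Toy closed-loop policy: scale PRBs with utilization and backlog."""
    if kpm["slice_prb_util"] > 0.8 or kpm["slice_buffer_bytes"] > 20_000:
        return min(max_prbs, current_prbs + 10)   # scale up
    if kpm["slice_prb_util"] < 0.3:
        return max(min_prbs, current_prbs - 10)   # scale down
    return current_prbs

def send_control(prbs):
    """Stand-in for a control message issued towards the RAN."""
    print(f"[xApp] requesting {prbs} PRBs for the slice")

if __name__ == "__main__":
    prbs = 50
    for _ in range(5):              # near-real-time loop (10 ms to 1 s scale)
        kpm = fetch_kpm()
        prbs = decide_prb_allocation(kpm, prbs)
        send_control(prbs)
        time.sleep(0.1)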


TU-12: Semantic Communications: Transmission Beyond Shannon Paradigm

VIRTUAL

Presenter:
Geoffrey Ye Li (Imperial College London, UK); Zhijin Qin (Queen Mary University of London, UK);  Xiaoming Tao (Tsinghua University, China)

Biography:
Dr. Geoffrey Ye Li is currently a Chair Professor in wireless systems with Imperial College London. Before joining Imperial in 2020, he was with the Georgia Institute of Technology for 20 years and with AT&T (Bell) Labs - Research for about five years. His general research interests include statistical signal processing and machine learning for wireless communications. In the related areas, he has published over 500 journal and conference papers in addition to over 40 granted patents. His publications have been cited over 40,000 times, and he has been recognized as a Highly Cited Researcher.

Dr. Zhijin Qin is a Lecturer (Assistant Professor) at Queen Mary University of London, UK. Her research interests include semantic communications and sparse signal processing in wireless communications. She is serving as an area editor of the IEEE JSAC Series on Machine Learning in Communications and Networks and as an editor of IEEE Transactions on Communications, IEEE Transactions on Cognitive Communications and Networking, and IEEE Communications Letters. Dr. Qin served as a symposium co-chair for IEEE VTC Fall 2019 and IEEE Globecom 2020/2021. She received the 2017 IEEE Globecom Best Paper Award, the 2018 IEEE Signal Processing Society Young Author Best Paper Award, the 2021 IEEE ComSoc SPCC Early Achievement Award, and the 2022 IEEE Communications Society Fred W. Ellersick Prize.

Dr. Tao is currently a Full Professor at the Department of Electronic Engineering, Tsinghua University. Her research focuses on semantic coding and computing communications for multimedia. In the related areas, she has published over 120 journal and conference papers in addition to over 40 granted patents.
Dr. Tao was a recipient of the National Science Foundation for Outstanding Youth, from 2017 to 2019, and many national awards, e.g., the 2017 China Young Women Scientists Award, the 2017 Top Ten Outstanding Scientists and Technologists from the China Institute of Electronics, the 2017 First Prize of the Wu Wen Jun A.I. Science and Technology Award, the 2016 National Award for Technological Invention Progress, and the 2015 Science and Technology Award of the China Institute of Communications. She served as the workshop general co-chair for the IEEE INFOCOM 2015, the organization co-chair for the IEEE ICCI*CC 2015/2020, and the volunteer leader for IEEE ICIP 2017. She is currently an editor of IEEE Transactions on Wireless Communications, China Communications, and Pattern Recognition, as well as the scientific editor of Chinese Journal of Electronics.

Abstract:
Shannon and Weaver categorized communications into three levels: 
•    Level A. How accurately can the symbols of communication be transmitted? 
•    Level B. How precisely do the transmitted symbols convey the desired meaning? 
•    Level C. How effectively does the received meaning affect conduct in the desired way? 
In the past decades, researchers have primarily focused on Level A communications. With the development of cellular communication systems, the achieved transmission rate has improved by tens of thousands of times and the system capacity is gradually approaching the Shannon limit. Semantic communications are regarded as a promising direction to improve system efficiency and reduce data traffic, so as to realize Level B or even Level C communications. Semantic communications aim to successfully transmit the semantic information that is relevant to the transmission task at the receiver. In this tutorial, we first introduce the concept of semantic communications and a general model for it. We then detail the principles and performance metrics of semantic communications. Afterwards, we present the latest work on deep learning enabled semantic communications for different sources, multi-user semantic communication systems, and multimedia semantic coding. Finally, we identify the research challenges in semantic communications.

The intended audience includes PhD students, postdocs, and researchers with a general background in machine learning and wireless communications.
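As a concrete illustration of the performance metrics mentioned above: one measure frequently used in the deep-learning-based semantic communications literature for text sources is the sentence similarity between the transmitted sentence s and the recovered sentence \hat{s}, computed from the embeddings B(·) of a pre-trained language model (e.g., BERT). The expression below is one common variant, reproduced here purely as an illustrative example and not necessarily the metric adopted in this tutorial:

\xi(s,\hat{s}) \;=\; \frac{B(s)\,B(\hat{s})^{\mathsf{T}}}{\lVert B(s)\rVert\,\lVert B(\hat{s})\rVert} \;\in\; [-1,\,1],

where values close to 1 indicate that the meaning of the sentence, rather than its exact symbol sequence, has been recovered; bit- or symbol-error rate alone cannot capture this distinction.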


TU-13: Deep Learning for Physical Layer Security: Towards Context-aware Intelligent Security for 6G Systems

VIRTUAL

Presenter:
Eduard Axel Jorswieck (Technische Universität Braunschweig); Babak Hossein Khalaj (Sharif University of Technology); Mehdi Letafati (Sharif University of Technology)

Biography:
Eduard Axel Jorswieck is the managing director of the Institute for Communications Technology and Full Professor at Technische Universität Braunschweig, Germany. From 2008 until 2019, he was the head of the Chair of Communications Theory and Full Professor at Dresden University of Technology, Germany. His main research interests are in the broad area of communications. He has published more than 150 journal papers, 15 book chapters, 3 monographs, and some 300 conference papers. 
Dr. Jorswieck is a Fellow of the IEEE. Since 2017, he has been serving as Editor-in-Chief of the EURASIP Journal on Wireless Communications and Networking, and since 2021 he has served on the editorial board of IEEE Transactions on Communications. From 2011 to 2015, he was an Associate Editor for IEEE Transactions on Signal Processing. From 2008 until 2011, he served as an Associate Editor, and from 2012 until 2013 as a Senior Associate Editor, for IEEE Signal Processing Letters. Since 2013, he has served as an Editor for IEEE Transactions on Wireless Communications, and since 2016 as an Associate Editor for IEEE Transactions on Information Forensics and Security. In 2006, he received the IEEE Signal Processing Society Best Paper Award.

Babak Hossein Khalaj (Senior Member, IEEE) received his B.Sc. degree in Electrical Engineering from Sharif University of Technology, Tehran, Iran, in 1989, and his M.Sc. and Ph.D. degrees in Electrical Engineering from Stanford University, Stanford, CA, USA, in 1993 and 1996, respectively. He is currently a Full Professor at the Department of Electrical Engineering of Sharif University of Technology and the Director of the Centre for Information Systems and Data Science at Sharif University. He was part of the pioneering team at Stanford University involved in the adoption of multi-antenna arrays in mobile networks. Since 1999, he has been a Senior Consultant in the area of data communications, and from 2006 to 2007 he was a Visiting Professor with CEIT, San Sebastian, Spain. He has co-authored many papers in signal processing and digital communications and holds four U.S. patents. He was the recipient of the Alexander von Humboldt Fellowship from 2007 to 2008 and the Nokia Visiting Professor Scholarship in 2018.

Mehdi Letafati received his B.Sc. and M.Sc. degrees in Electrical Engineering from Sharif University of Technology, Tehran, Iran, in 2019 and 2021, respectively. He is currently pursuing a Ph.D. degree in Electrical Engineering (communication systems) at Sharif University of Technology. He attended the Cornell, Maryland, Max Planck Pre-Doctoral Research School in Saarbrücken, Germany, in August 2020. His research interests include both the theoretical and practical aspects of learning-based communication security, privacy in data science, and secure digital healthcare. He is also interested in the intersection of deep learning and information theory. He was ranked 4th among all participants in the Nationwide University Entrance Exam in 2015 and has since been a recipient of Iran's National Elite Foundation scholarships; he also received an Exceptional Talent award for outstanding performance during his undergraduate studies. Mehdi serves as a peer reviewer for top IEEE journals, including the IEEE Internet of Things Journal, IEEE Transactions on Signal Processing, and IEEE Wireless Communications Letters. He also served as a TPC member of the 23rd IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC 2022).

Abstract:
Despite the development of different mechanisms to secure the core network of communication systems, the wireless edge of B5G and 6G systems is still vulnerable to security and privacy risks due to the inherent broadcast nature of the wireless medium. To overcome this issue, physical layer security (PLS) solutions are envisioned to be leveraged in 6G networks thanks to their intrinsic capability of adapting to the communication medium, providing agile security for different scenarios. As 6G is envisioned to bring device-level intelligence, the capabilities of deep learning (DL) algorithms can be incorporated into PLS protocols, resulting in novel context-aware learning-based secure frameworks.
In this tutorial, we provide a comprehensive overview of learning-based PLS techniques as one of the key enablers for safeguarding sixth generation (6G) wireless networks. As a preliminary, we first formulate the PLS framework and review the two main classes of PLS solutions, i.e., key-less and key-based PLS. Then, we address some of the state-of-the-art concepts and protocols in PLS, including “Quality-of-Security” (QoSec), multi-user mMIMO PHY key agreement, and man-in-the-middle (MitM) resilient key generation. In the next part of the tutorial, we take into account the context of the communicated data and focus on learning-based PLS to realize context-aware intelligent solutions against passive and active adversarial attacks. i) For key-less PLS, we introduce different DL-based approaches for designing wiretap codes and enhancing the QoSec; we further introduce an end-to-end learning-based secure framework that privatizes sensitive data against adversarial neural networks. ii) For key-based PLS, recurrent neural networks and reservoir learning approaches are addressed in the context of wireless key generation. Finally, we provide future directions, including the potential use of intelligent PLS solutions for future e-health services, to provide interested attendees with useful insights. The learning-based PLS solutions presented in this tutorial will shed light on further developments of security-as-a-service (SecaaS) products.
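To give a flavor of the key-based PLS pipeline outlined above, the minimal numpy sketch below shows how two legitimate nodes can distill a shared secret bit string from correlated (reciprocal) channel measurements, while an eavesdropper observing an independent channel cannot. The single-bit median quantizer, the noise levels, and the assumption of an independent eavesdropper channel are simplifications chosen only for illustration; they are not the schemes presented in the tutorial.

import numpy as np

rng = np.random.default_rng(0)
n = 256                                   # number of channel probes

# Reciprocal channel gain observed by Alice and Bob (plus measurement noise);
# Eve observes a statistically independent channel (simplifying assumption).
h = rng.normal(size=n)                    # common reciprocal component
alice = h + 0.1 * rng.normal(size=n)
bob   = h + 0.1 * rng.normal(size=n)
eve   = rng.normal(size=n) + 0.1 * rng.normal(size=n)

def quantize(x):
    """1-bit quantization around the median (toy key-extraction step)."""
    return (x > np.median(x)).astype(int)

k_a, k_b, k_e = quantize(alice), quantize(bob), quantize(eve)

print("Alice/Bob bit agreement:", np.mean(k_a == k_b))   # close to 1
print("Alice/Eve bit agreement:", np.mean(k_a == k_e))   # close to 0.5
# In practice, information reconciliation and privacy amplification would
# follow, to correct residual mismatches and to remove Eve's partial knowledge.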


TU-14: Distributed Machine Learning for 6G Networks: A Tutorial

VIRTUAL

Presenter:
Ekram Hossain (University of Manitoba, Canada); Dusit Niyato (Nanyang Technological University, Singapore); Dinh Thai Hoang (University of Technology Sydney, Australia); Shimin Gong (Sun Yat-sen University, Shenzhen, China)

Biography:
Ekram Hossain (F'15) is a Professor in the Department of Electrical and Computer Engineering at the University of Manitoba, Canada. He is a Member (Class of 2016) of the College of the Royal Society of Canada. Dr. Hossain's current research interests include the design, analysis, and optimization of wireless communication networks, with emphasis on 5G and B5G cellular networks. He has authored/edited several books in these areas. To date, his research has received more than 31,200 citations on Google Scholar (h-index = 91). He has presented numerous invited talks/seminars as well as tutorials at IEEE conferences including IEEE Globecom, ICC, WCNC, and VTC. He was a Distinguished Lecturer of the IEEE Communications Society for two consecutive terms (2012-2015). Currently he is a Distinguished Lecturer of the IEEE Vehicular Technology Society (2017-) and the IEEE Communications Society (2018-). He was listed as a Clarivate Analytics Highly Cited Researcher in Computer Science in 2017, 2018, 2019, and 2020. Dr. Hossain has won several research awards, including the 2017 IEEE Communications Society Best Survey Paper Award and the 2011 IEEE Communications Society Fred Ellersick Prize Paper Award. Currently he serves as the Editor-in-Chief of IEEE Press (2018-).

Dusit Niyato is currently a professor in the School of Computer Science and Engineering and, by courtesy, the School of Physical & Mathematical Sciences, Nanyang Technological University, Singapore. He received his B.E. from King Mongkut's Institute of Technology Ladkrabang (KMITL), Thailand, in 1999 and his Ph.D. in Electrical and Computer Engineering from the University of Manitoba, Canada, in 2008. He has published more than 600 technical papers in the area of wireless and mobile networking and is an inventor of four US and German patents. He won the Best Young Researcher Award of the IEEE Communications Society (ComSoc) Asia Pacific (AP) and the 2011 IEEE Communications Society Fred W. Ellersick Prize Paper Award. Currently, he is serving as Editor-in-Chief of IEEE Communications Surveys and Tutorials, an area editor of IEEE Transactions on Wireless Communications (Radio Management and Multiple Access), and an associate editor of IEEE Transactions on Mobile Computing, IEEE Transactions on Vehicular Technology, IEEE Transactions on Cognitive Communications and Networking, and IEEE Wireless Communications. He was a guest editor of the IEEE Journal on Selected Areas in Communications. He was a Distinguished Lecturer of the IEEE Communications Society for 2016-2017. He was named a highly cited researcher in computer science in 2017-2020. He is a Fellow of the IEEE.

Dinh Thai Hoang is currently a faculty member at the School of Electrical and Data Engineering, University of Technology Sydney, Australia. He received his Ph.D. in Computer Science and Engineering from the Nanyang Technological University, Singapore, in 2016. His research interests include emerging topics in wireless communications and networking such as machine learning, edge intelligence, cybersecurity, IoT, and Metaverse. He has received several awards including the Australian Research Council and IEEE TCSC Award for Excellence in Scalable Computing (Early Career Researcher). Currently, he is an Editor of IEEE Transactions on Wireless Communications, IEEE Transactions on Cognitive Communications and Networking, IEEE Transactions on Vehicular Technology, and Associate Editor of IEEE Communications Surveys & Tutorials. 

Shimin Gong is currently an Associate Professor with the School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China. He received the Ph.D. degree in Computer Engineering from Nanyang Technological University, Singapore, in 2014. He was an associate researcher with the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China. His research interests include Internet of Things (IoT), wireless powered communications, and backscatter communications, with a special focus on optimization and machine learning in wireless communications. He was a recipient of the Best Paper Award on MAC and Cross-layer Design in IEEE WCNC 2019. He has been the Lead Guest Editor of the IEEE Transactions on Wireless Communications, a special issue on Deep Reinforcement Learning on Future Wireless Communication Networks.

Abstract:
The main objective of this tutorial is to provide the fundamental background of distributed machine learning (DML) techniques and then study their recent advances to address practical challenges in 6G networks. In particular, we first provide an overview of 6G networks together with emerging applications of machine learning techniques for the development of such networks. We then give a tutorial on DML techniques, from basic concepts to advanced models, to motivate and provide fundamental knowledge for the audience. After that, we review advanced DML approaches proposed to address emerging issues in 6G networks, including digital twins, joint radar and data communications, distributed coded learning, and intelligent reflecting surfaces. Finally, we highlight important challenges, open issues, and future research directions in applying deep reinforcement learning.
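As a minimal illustration of one DML workhorse typically covered under this topic, the numpy sketch below implements federated averaging (FedAvg) for a toy linear-regression task: a server broadcasts a global model, clients run a few local gradient steps on their own data, and the server averages the returned models. The data, client count, and step sizes are arbitrary assumptions used only to show the aggregate-then-broadcast structure, not the specific algorithms of this tutorial.

import numpy as np

rng = np.random.default_rng(1)
num_clients, dim, local_steps, rounds, lr = 5, 3, 10, 20, 0.1
w_true = np.array([1.0, -2.0, 0.5])

# Each client holds its own local dataset (toy linear-regression data).
data = []
for _ in range(num_clients):
    X = rng.normal(size=(100, dim))
    y = X @ w_true + 0.05 * rng.normal(size=100)
    data.append((X, y))

def local_update(w, X, y):
    """A few local gradient-descent steps on the client's own data."""
    for _ in range(local_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(dim)
for _ in range(rounds):
    # Server broadcasts w_global; clients train locally; server averages.
    local_models = [local_update(w_global.copy(), X, y) for X, y in data]
    w_global = np.mean(local_models, axis=0)

print("learned model     :", np.round(w_global, 3))
print("ground-truth model:", w_true)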


TU-15: Holographic Radio: A New Paradigm for Ultra-Massive MIMO

VIRTUAL

Presenter:
Lingyang Song (Peking University, Beijing, China); Zhu Han (University of Houston, Houston); Boya Di (Peking University, Beijing, China); Hongliang Zhang (Princeton University, Princeton, USA)

Biography:
Lingyang Song (S'03-M'06-SM'12-F'19) received his PhD from the University of York, UK, in 2007, where he received the K. M. Stott Prize for excellent research. He worked as a research fellow at the University of Oslo, Norway, until rejoining Philips Research UK in March 2008. In May 2009, he joined the School of Electronics Engineering and Computer Science, Peking University, where he is now a Boya Distinguished Professor. His main research interests include wireless communications, mobile computing, and machine learning. Dr. Song has received many paper awards, including the IEEE Leonard G. Abraham Prize in 2016, best paper awards at IEEE ICC 2014, IEEE ICC 2015, and IEEE Globecom 2014, and the best demo award at ACM MobiHoc 2015. He received the National Science Fund for Distinguished Young Scholars in 2017 and the First Prize in the Nature Science Award of the Ministry of Education of China in 2017. Dr. Song has served as an IEEE ComSoc Distinguished Lecturer (2015-2018), an Area Editor of IEEE Transactions on Vehicular Technology (2019-), and Co-chair of the IEEE Communications Society Asia Pacific Board Technical Affairs Committee (2020-). He is a Clarivate Analytics Highly Cited Researcher.

Zhu Han (S'01-M'04-SM'09-F'14) received the B.S. degree in electronic engineering from Tsinghua University in 1997, and the M.S. and Ph.D. degrees in electrical engineering from the University of Maryland, College Park, in 1999 and 2003, respectively. From 2000 to 2002, he was an R&D Engineer at JDSU, Germantown, Maryland. From 2003 to 2006, he was a Research Associate at the University of Maryland. From 2006 to 2008, he was an assistant professor at Boise State University, Idaho. Currently, he is a Professor in the Electrical and Computer Engineering Department as well as the Computer Science Department at the University of Houston, Texas. His research interests include wireless resource allocation and management, wireless communications and networking, game theory, wireless multimedia, security, and smart grid communication. Dr. Han received an NSF CAREER Award in 2010, the Fred W. Ellersick Prize of the IEEE Communications Society in 2011, the EURASIP Best Paper Award for the Journal on Advances in Signal Processing in 2015, the IEEE Kiyo Tomiyasu Award in 2021, and several best paper awards at IEEE conferences. Dr. Han has been a top 1% highly cited researcher according to the Web of Science since 2017 and an AAAS Fellow since 2019.

Boya Di (S'17-M'19) obtained her Ph.D. degree from the Department of Electronics, Peking University, China, in 2019. Prior to that, she received the B.S. degree in electronic engineering from Peking University in 2014. She was a postdoctoral researcher at Imperial College London and is now an assistant professor at Peking University. Her current research interests include holographic radio, reconfigurable intelligent surfaces, multi-agent systems, edge computing, and aerial access networks. She has published over seven journal papers on reconfigurable holographic surface aided communications and sensing. She received the best doctoral thesis award from the China Education Society of Electronics in 2019 and is a recipient of the 2021 IEEE ComSoc Asia-Pacific Outstanding Paper Award. She has served as an associate editor for IEEE Transactions on Vehicular Technology since June 2020 and as a workshop co-chair for IEEE WCNC 2020 and 2021.

Hongliang Zhang (S'15-M'19) received the B.S. and Ph.D. degrees from the School of Electrical Engineering and Computer Science at Peking University in 2014 and 2019, respectively. He was a Postdoctoral Fellow in the Electrical and Computer Engineering Department at the University of Houston, Texas. Currently, he is a Postdoctoral Associate in the Department of Electrical and Computer Engineering at Princeton University, New Jersey. His current research interests include reconfigurable intelligent surfaces, aerial access networks, optimization theory, and game theory. He received the best doctoral thesis award from the Chinese Institute of Electronics in 2019 and was an exemplary reviewer for IEEE Transactions on Communications in 2020. He is also the recipient of the 2021 IEEE ComSoc Heinrich Hertz Award for Best Communications Letters and the 2021 IEEE ComSoc Asia-Pacific Outstanding Paper Award. He has served as a TPC member for many IEEE conferences, such as Globecom, ICC, and WCNC. He is currently an Editor for IEEE Communications Letters, IET Communications, and Frontiers in Signal Processing, and has served as a Guest Editor for several journals, such as the IEEE Internet of Things Journal and the Journal of Communications and Networks.

Abstract:
To enable a ubiquitous intelligent information network, massive multiple-input multiple-output (MIMO) technology is expected to enhance the network capacity significantly by exploiting spatial diversity. However, existing massive MIMO techniques rely heavily on phased arrays, which require numerous phase shifters and power amplifiers to construct complex phase-shifting circuits for accurate beamforming. As the physical dimensions of phased arrays scale up, the implementation of ultra-massive MIMO systems in practice becomes prohibitive from both cost and power consumption perspectives. Therefore, there is an urgent need for novel antenna technologies to meet the exponentially increasing data demands of future 6G and beyond wireless communications. Thanks to recent breakthroughs in reconfigurable metamaterial-based antennas, it is now possible to regulate electromagnetic waves via software instead of costly hardware components. As one representative class of metamaterial antennas, reconfigurable holographic surfaces (RHSs), composed of densely packed sub-wavelength metamaterial elements, have become one of the most promising alternatives to phased arrays. Specifically, the feeds of the RHS are embedded in the bottom layer of the surface to generate the incident electromagnetic waves, enabling an ultra-thin structure. The RHS uses its metamaterial radiation elements to construct a holographic pattern based on the holographic interference principle. Each element can thus electrically control the radiation amplitude of the incident electromagnetic waves according to the holographic pattern to generate the desired directional beams. Such a beamforming technique is also known as holographic beamforming. Benefiting from its compact design, low power consumption, and low cost, the RHS realizes continuous or quasi-continuous apertures to enable holographic communications. In this tutorial, we will comprehensively introduce the unique features of RHSs that enable their broad applications to communication and sensing. Related challenges and signal processing techniques will be presented, and a hardware prototype will be shown with implementation details.
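A toy numpy sketch of the holographic interference principle described above may help fix ideas: each element's real, non-negative radiation amplitude is obtained from the interference between the reference wave launched by the feed and the desired object wave, here for a one-dimensional surface and a single steering direction. The geometry, the guided-wave propagation constant, and the normalization are simplified assumptions chosen only for illustration; they do not reproduce the presenters' designs or prototype.

import numpy as np

c = 3e8
f = 30e9                               # carrier frequency (illustrative)
lam = c / f
k0 = 2 * np.pi / lam                   # free-space wavenumber
ks = 1.5 * k0                          # guided (surface) wavenumber, assumed

M = 64                                 # number of metamaterial elements
d = lam / 4                            # sub-wavelength element spacing
x = np.arange(M) * d                   # element positions along the surface
theta0 = np.deg2rad(30)                # desired beam direction

# Reference wave launched by the feed (at x = 0) and desired object wave.
ref = np.exp(-1j * ks * x)
obj = np.exp(-1j * k0 * x * np.sin(theta0))

# Holographic pattern: real amplitude weights in [0, 1] (one simplified,
# textbook-style normalization of the interference pattern).
w = (np.real(obj * np.conj(ref)) + 1) / 2

# Radiation pattern of the amplitude-controlled surface: each element is
# excited by the reference wave and scaled by its holographic weight.
angles = np.deg2rad(np.linspace(-90, 90, 721))
steer = np.exp(1j * k0 * np.outer(x, np.sin(angles)))
pattern = np.abs((w * ref) @ steer)
peak = np.rad2deg(angles[np.argmax(pattern)])
print(f"beam peak near {peak:.1f} degrees (target: 30.0)")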


TU-16: IEEE 802.11be and Beyond: All You Need to Know about Next-generation Wi-Fi

VIRTUAL

Presenter:
Lorenzo Galati-Giordano (Nokia Bell Labs, Germany);  Giovanni Geraci (Universitat Pompeu Fabra, Spain); Boris Bellalta (Universitat Pompeu Fabra, Spain)

Biography:
Lorenzo Galati Giordano (SM'20) has been a Senior Research Engineer at Nokia Bell Labs Germany since 2015, producing leading research contributions in the area of radio systems operating in the unlicensed spectrum. Lorenzo has more than 15 years of academic and industrial experience in communication systems, protocols, and standards, resulting in the co-authoring of tens of commercial patents, publications in prestigious books, IEEE journals and conferences, and standards contributions. He was previously an R&D System Engineer for Azcom Technology, an Italian SME, from 2010 to 2014, and holds a PhD from Politecnico di Milano, Italy, and a post-graduate master's degree in Innovation Management from IlSole24Ore Business School, Italy. Lorenzo's current focus is on next generation Wi-Fi technologies and reliable low-latency techniques for the unlicensed spectrum.

Giovanni Geraci (SM'19) is with Universitat Pompeu Fabra in Barcelona, where he is an Assistant Professor and the Head of the Telecommunications program. He was previously a Research Scientist with Nokia Bell Labs and holds a Ph.D. from UNSW Sydney. He serves as a Distinguished Lecturer of both the IEEE Communications and Vehicular Technology Societies, is a co-inventor of a dozen patents, and received the IEEE ComSoc EMEA Outstanding Young Researcher Award.

Boris Bellalta (SM’13) is a Full Professor at Universitat Pompeu Fabra (UPF), where he heads the Wireless Networking group. His research interests are in the area of wireless networks and performance evaluation, with emphasis on Wi-Fi technologies, and Machine Learning-based adaptive systems. He is currently involved as principal investigator and coordinator in several EU, national and industry funded research projects that aim to push forward our understanding of complex wireless systems, and in particular, contribute to the design of future wireless networks to support XR immersive communications.

Abstract:
What will Wi-Fi be in 2030? As hordes of data-hungry devices challenge its current capabilities, the IEEE strikes again with 802.11be, alias Wi-Fi 7. This brand-new amendment promises a (r)evolution of unlicensed wireless connectivity as we know it, unlocking access to gigabit, reliable and low-latency communications, and reinventing manufacturing and social interaction through digital augmentation. More than that, time-sensitive networking protocols are being put forth with the overarching goal of making wireless the new wired. With the 802.11be standardization process being consolidated and that of its successor about to kick off, we will shed light on what to expect from Wi-Fi in the next decade, placing the spotlight on the must-have features for critical and delay-sensitive applications, and illustrating their benefits through tangible performance results.
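For readers who like numbers, the back-of-the-envelope Python computation below reproduces the frequently quoted peak PHY rate of roughly 46 Gb/s for 802.11be from a handful of parameters (a 320 MHz channel, 4096-QAM, rate-5/6 coding, 16 spatial streams, and a 0.8 microsecond guard interval). The data-subcarrier count is an assumption carried over from the 802.11ax tone plan, so the result should be read as indicative rather than normative.

# Back-of-the-envelope 802.11be (Wi-Fi 7) peak PHY rate.
# Assumptions: 980 data subcarriers per 80 MHz (11ax-like tone plan),
# 4096-QAM (12 bits/subcarrier), rate-5/6 coding, 16 spatial streams,
# 12.8 us OFDM symbol plus 0.8 us guard interval.
data_subcarriers = 980 * 4           # 320 MHz channel
bits_per_subcarrier = 12             # 4096-QAM
coding_rate = 5 / 6
spatial_streams = 16
symbol_duration = 12.8e-6 + 0.8e-6   # seconds

rate_bps = (data_subcarriers * bits_per_subcarrier * coding_rate
            * spatial_streams) / symbol_duration
print(f"peak PHY rate ~ {rate_bps / 1e9:.1f} Gb/s")   # ~46 Gb/s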


TU-17: Towards a Wireless Metaverse:  A Confluence of Extended Reality (XR), Artificial Intelligence (AI) and Semantic Communications

SUNDAY, DEC 4 8:00 - 11:30  /  LOCATION: Capri II

Presenter:
Walid Saad (ECE, Virginia Tech, USA); Christina Chaccour (ECE, Virginia Tech, USA)

Biography:
Walid Saad (S'07, M'10, SM'15, F'19) received his Ph.D. degree from the University of Oslo in 2010. He is a Professor at the Department of Electrical and Computer Engineering at Virginia Tech, where he leads the Network sciEnce, Wireless, and Security (NEWS) laboratory. His research interests include wireless networks (5G/6G/beyond), machine learning, game theory, security, semantic communications, cyber-physical systems, and network science. Dr. Saad is a Fellow of the IEEE. He is also the recipient of the NSF CAREER Award in 2013 and the Young Investigator Award from the Office of Naval Research (ONR) in 2015. He is the author/co-author of papers that received eleven conference best paper awards, at WiOpt in 2009, ICIMP in 2010, IEEE WCNC in 2012, IEEE PIMRC in 2015, IEEE SmartGridComm in 2015, EuCNC in 2017, IEEE GLOBECOM in 2018, IFIP NTMS in 2019, IEEE ICC in 2020, IEEE GLOBECOM in 2020, and IEEE ICC in 2022. He is the recipient of the 2015 and 2022 Fred W. Ellersick Prize from the IEEE Communications Society, of the 2017 IEEE ComSoc Best Young Professional in Academia Award, of the 2018 IEEE ComSoc Radio Communications Committee Early Achievement Award, and of a 2019 recognition from the IEEE ComSoc Communication Theory Technical Committee. He was also a co-author of the 2019 and 2021 IEEE Communications Society Young Author Best Papers, and he was an IEEE Distinguished Lecturer in 2019-2020. He is the Editor-in-Chief of the IEEE Transactions on Machine Learning in Communications and Networking.

Christina Chaccour (S'17) received the B.E. degree (summa cum laude) in Electrical Engineering from Notre Dame University-Louaize, Lebanon, in 2018 and the M.S. degree in Electrical Engineering from Virginia Tech, Blacksburg, VA, USA, in 2020. She is currently pursuing the Ph.D. degree with the Bradley Department of Electrical and Computer Engineering, Virginia Tech, where her research interests include wireless communications, 5G and 6G networks, extended reality, terahertz frequency bands, machine learning, and semantic communications. She has derived some of the first performance analysis results on the potential of networking at THz frequencies. Christina is the co-founder of the startup Internet of Trees (IOTree), which has won many local and international awards. She has held summer internship positions at Ericsson Inc., Plano, TX, USA, and Cadence Design Systems, Munich, Germany. She received the best paper award for her peer-reviewed conference paper at the 10th IFIP Conference on New Technologies, Mobility, and Security (NTMS), Canary Islands, in 2019. Additionally, Christina received the exemplary reviewer (fewer than 2%) award from IEEE Transactions on Communications in 2021. She has also served as a reviewer and a technical program committee member for various IEEE transactions and flagship conferences.

Abstract:
The emerging metaverse requires the convergence of multiple technologies, ranging from extended reality (XR) to artificial intelligence (AI) and digital twins, that must come together to create a second life. Nonetheless, this metaverse vision will not be fulfilled unless a major leap in today's wireless and AI technologies is realized to enable a seamless merger of the digital and physical worlds. For example, the metaverse requires a radical paradigm shift from today's AI-supported networks towards AI-native “reasoning” networks. In particular, building a fully immersive, hyper-spatiotemporal, and self-sustaining digital meta-life requires future wireless networks to be equipped with three fundamental components. First, future networks must carry advanced holographic XR content over the entire virtual-reality spectrum. Second, metaverse-ready wireless networks must support unprecedented quality-of-experience (QoE) requirements across multiple metrics. Third, creating metaverse-ready networks necessitates transforming communication links from a mere transmission medium into a reasoning-based system, whereby the transmitter-receiver relationship, currently viewed as a bit pipe, becomes a teacher-apprentice one in which the semantics (meaning) of information are conveyed and AI is integrated across the system. Consequently, in this tutorial, we first investigate the challenges underlying the successful operation of XR services and then scrutinize a suite of technical solutions that range from operating at higher frequency bands to re-engineering the cellular architecture. Then, we expose, in detail, the need for a new breed of generalizable, continual, reasoning-based, and reliable AI. In particular, we scrutinize novel AI mechanisms that can learn complex and intertwined tasks with high generalizability and specialization, yet can act in a highly reliable and low-latency fashion. Moreover, we provide an in-depth exposition of the concept of semantic communications and its underlying challenges and opportunities. We conclude the tutorial with an outlook on open metaverse problems at the confluence of AI, networking, XR, and semantic communications.


TU-18: Deep Learning for the Physical Layer: A Hands-on Experience

VIRTUAL

Presenter:
Jakob Hoydis (NVIDIA, France); Fayçal Aït Aoudia (NVIDIA, France); Sebastian Cammerer (NVIDIA, Germany) 

Biography:
Jakob Hoydis is a Principal Research Scientist at NVIDIA working at the intersection of machine learning and wireless communications. Prior to this, he was head of a research department at Nokia Bell Labs, France, and co-founder of the social network SPRAED. He is one of the maintainers and core developers of the Sionna open-source link-level simulator. He obtained the diploma degree in electrical engineering from RWTH Aachen University, Germany, and the Ph.D. degree from Supélec, France. From 2019 to 2021, he was chair of the IEEE ComSoc Emerging Technology Initiative on Machine Learning as well as an Editor of the IEEE Transactions on Wireless Communications. Since 2019, he has been Area Editor of the IEEE JSAC Series on Machine Learning in Communications and Networks. He is the recipient of the 2019 VTG IDE Johann-Philipp-Reis Prize, the 2019 IEEE SEE Glavieux Prize, the 2018 IEEE Marconi Prize Paper Award, the 2015 IEEE Leonard G. Abraham Prize, the IEEE WCNC 2014 Best Paper Award, the 2013 VDE ITG Förderpreis Award, and the 2012 Publication Prize of the Supélec Foundation. He has received the 2018 Nokia AI Innovation Award, as well as the 2018 and 2019 Nokia France Top Inventor Awards. He is a co-author of the textbook “Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency” (2017).

Fayçal Aït Aoudia is a Senior Research Scientist at NVIDIA working on the convergence of wireless communications and machine learning. Before joining NVIDIA, he was a research scientist at Nokia Bell Labs, France. He is one of the maintainers and core developers of the Sionna open-source link-level simulator. He obtained the diploma degree in computer science from the Institut National des Sciences Appliquées de Lyon, France, in 2014, and the PhD in signal processing from the University of Rennes 1, France, in 2017. He has received the 2018 Nokia AI Innovation Award, as well as the 2018, 2019, and 2020 Nokia Top Inventor Awards.

Sebastian Cammerer is a Research Scientist at NVIDIA working at the intersection of machine learning and wireless communications. Before joining NVIDIA, he received his PhD in electrical engineering and information technology from the University of Stuttgart, Germany, in 2021. He is one of the maintainers and core developers of the Sionna open-source link-level simulator. His main research topics are machine learning for wireless communications and channel coding. Further research interests include modulation, parallel computing for signal processing, and information theory. He is the recipient of the IEEE SPS Young Author Best Paper Award 2019, the Best Paper Award of the University of Stuttgart 2018, the Anton- und Klara Röser Preis 2016, the Rohde&Schwarz Best Bachelor Award 2015, and the VDE-Preis 2016 for his master's thesis, and he was a third-prize winner of the Nokia Bell Labs Prize 2019.

Abstract:
In recent years, machine learning for communications has become one of the most attractive research topics in our community, and it is foreseeable that it will play an increasingly important role in the future evolution of 5G as well as in the development of 6G. This trend is supported by the recent 3GPP announcement promoting AI/ML as a new study item for the upcoming Release 18, which offers many attractive interdisciplinary research questions at the interface of machine learning, communications engineering, information theory, and hardware design. Against this background, the objective of our tutorial is to introduce the key concepts of deep learning and their application to problems in communications, ranging from channel estimation, over a complete neural OFDM receiver, to an entirely neural network-based communications system that does not use any traditional signal processing algorithm. The tutorial is hence a great opportunity to learn about cutting-edge research in communications and deep learning. Besides the theoretical background, a particular focus is put on practical implementation with state-of-the-art deep learning libraries. We will introduce Sionna, a new open-source software library for GPU-accelerated link-level simulations and 6G research that has been developed by the tutorial instructors. Sionna enables rapid prototyping of complex communication system architectures and provides native support for the integration of neural networks. The attendees will receive detailed Jupyter notebooks with code examples to deepen their understanding and to quickly explore their own research ideas.
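To preview the hands-on flavor of the tutorial, the self-contained Keras sketch below trains a tiny end-to-end, autoencoder-based communications system over an AWGN channel, the classic toy problem in which transmitter and receiver are learned jointly. It deliberately does not use the Sionna API, and the message set size, block length, and noise level are illustrative assumptions rather than values taken from the tutorial material.

import numpy as np
import tensorflow as tf

M, n = 16, 7                  # 16 messages mapped to 7 real channel uses (toy setup)
noise_std = 0.3               # assumed AWGN standard deviation

inp = tf.keras.Input(shape=(M,))                          # one-hot message
x = tf.keras.layers.Dense(32, activation="relu")(inp)     # neural transmitter
x = tf.keras.layers.Dense(n)(x)
x = tf.keras.layers.Lambda(                               # average power constraint
    lambda t: tf.math.l2_normalize(t, axis=1) * np.sqrt(n))(x)
y = tf.keras.layers.Lambda(                               # AWGN channel
    lambda t: t + tf.random.normal(tf.shape(t), stddev=noise_std))(x)
x = tf.keras.layers.Dense(32, activation="relu")(y)       # neural receiver
out = tf.keras.layers.Dense(M, activation="softmax")(x)

autoencoder = tf.keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="categorical_crossentropy",
                    metrics=["accuracy"])

msgs = np.random.randint(0, M, size=20000)
onehot = np.eye(M)[msgs]
autoencoder.fit(onehot, onehot, epochs=5, batch_size=256, verbose=0)
_, acc = autoencoder.evaluate(onehot, onehot, verbose=0)
print(f"message recovery accuracy over the AWGN channel: {acc:.3f}")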


TU-19: Wireless Information and Energy Transfer in the Era of 6G Communications

THURSDAY, DEC 8 8:00 - 11:30  /  LOCATION: Capri III

Presenter:
Ioannis Krikidis (University of Cyprus, Cyprus); Constantinos Psomas (University of Cyprus, Cyprus)

Biography:
Dr. Ioannis Krikidis received the diploma in Computer Engineering from the Computer Engineering and Informatics Department (CEID) of the University of Patras, Greece, in 2000, and the M.Sc. and Ph.D. degrees from Ecole Nationale Superieure des Telecommunications (ENST), Paris, France, in 2001 and 2005, respectively, all in electrical engineering. From 2006 to 2007 he worked as a post-doctoral researcher with ENST, Paris, France, and from 2007 to 2010 he was a Research Fellow in the School of Engineering and Electronics at the University of Edinburgh, Edinburgh, UK. He is currently an Associate Professor at the Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus. He is an IEEE Fellow for contributions to full-duplex radio and wireless-powered communications. His current research interests include wireless communications, cooperative networks, 4G/5G communication systems, wireless powered communications, and secrecy communications. Dr. Krikidis serves as an Associate Editor for IEEE Transactions on Communications, IEEE Transactions on Green Communications and Networking, and IEEE Wireless Communications Letters. He has published over 180 papers in scientific journals and international conferences. He received the Young Researcher Award from the Research Promotion Foundation, Cyprus, in 2013, as well as the IEEE ComSoc Best Young Professional in Academia Award in 2016. He has been recognized by Thomson Reuters as an ISI Highly Cited Researcher in 2017, 2018, 2019, 2020, and 2021.

Dr. Constantinos Psomas holds a BSc (Hons) in Computer Science and Mathematics from Royal Holloway, University of London, an MSc in Applicable Mathematics from the London School of Economics, and a PhD in Mathematics from The Open University, UK. He is currently a Research Fellow at the Department of Electrical and Computer Engineering of the University of Cyprus. From 2011 to 2014, he was a Postdoctoral Research Fellow at the Department of Electrical Engineering, Computer Engineering and Informatics of the Cyprus University of Technology. Dr. Psomas serves as an Associate Editor for IEEE Wireless Communications Letters and Frontiers in Communications and Networks. His research activities currently lie in the area of wireless communications, with particular focus on wireless powered communications, cooperative networks, and full-duplex communications.

Abstract:
Conventional energy-constrained wireless systems such as sensor networks are powered by batteries and have limited lifetime. Wireless power transfer (WPT) is a promising technology for energy sustainable networks, where terminals can harvest energy from dedicated electromagnetic radiation through appropriate electronic circuits. The integration of WPT technology into communication networks introduces a fundamental co-existence of information and energy flows; radio-frequency signals are used in order to convey information and/or energy. The efficient management of these two flows through sophisticated networking protocols, signal processing/communication techniques and network architectures, gives rise to a new communication paradigm called wireless powered communications (WPC). In this tutorial, we discuss the principles of WPC and we highlight its main network architectures as well as the fundamental trade-off between information and energy transfer. Several examples, which deal with the integration of WPC in modern communication systems, are presented. Specifically, we study some fundamental network structures such as the MIMO broadcast channel, the interference channel, the relay channel, the multiple-access channel, and ad-hoc networks. The integration of WPC in 6G and beyond is analyzed and discussed through the use of tools from stochastic geometry. Future research directions and challenges are also pointed out.
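The fundamental trade-off between information and energy transfer mentioned above can be made concrete with the standard power-splitting receiver model, written below in simplified form (single antenna, linear energy harvester, noise lumped into a single term) purely as an illustrative example: a fraction \rho of the received power feeds the information decoder, and the remainder feeds the energy harvester.

R(\rho) = \log_2\!\left(1 + \frac{\rho\, P\, |h|^2}{\sigma^2}\right) \ \text{[bit/s/Hz]},
\qquad
E(\rho) = \eta\,(1-\rho)\, P\, |h|^2 \ \text{[W]},
\qquad 0 \le \rho \le 1,

where P is the transmit power, h the channel coefficient, \sigma^2 the effective noise power at the information receiver, and \eta \in (0,1] the RF-to-DC conversion efficiency. Sweeping \rho from 0 to 1 traces out the rate-energy region of the link, which is exactly the kind of trade-off the tutorial examines in richer network settings.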


TU-20: Interplay between Sensing and Communications: Fundamental Limits, Signal Processing, and Prototyping

VIRTUAL

Presenter:
Fan Liu (Southern University of Science and Technology, China); Christos Masouros (University College London, UK); Yonina C. Eldar (Weizmann Institute of Science, Israel)

Biography:
Fan Liu (MIEEE) is currently an Assistant Professor in the Department of Electronic and Electrical Engineering, Southern University of Science and Technology. He previously held academic positions at University College London, first as a Visiting Researcher from 2016 to 2018 and then as a Marie Curie Research Fellow from 2018 to 2020. He was a recipient of the IEEE SPS Young Author Best Paper Award in 2021, the Best Ph.D. Thesis Award of the Chinese Institute of Electronics in 2019, and the Marie Curie Individual Fellowship in 2018, and he has been named an Exemplary Reviewer for several IEEE journals. He is the Founding Academic Chair of the IEEE ComSoc ISAC Emerging Technology Initiative (ISAC-ETI), an Associate Editor of the IEEE COMML and IEEE OJSP, and a Guest Editor of the IEEE JSAC, IEEE WCM, China Communications, and JCIN. He has served as the organizer of numerous workshops, special sessions, and tutorials related to ISAC, including at ICC, GLOBECOM, and ICASSP. He was the TPC Co-Chair of the 2022 IEEE JC&S Symposium, is serving as the Workshop and Special Session Co-Chair of ISWCS 2022, and will serve as a Track Co-Chair for WCNC 2024. His research interests include ISAC and vehicular networks.

Christos Masouros (SMIEEE) received the Diploma degree in Electrical and Computer Engineering from the University of Patras, Greece, in 2004, and the MSc by research and PhD degrees in Electrical and Electronic Engineering from the University of Manchester, UK, in 2006 and 2009, respectively. Since 2019, he has been a Full Professor of Signal Processing and Wireless Communications in the Department of Electrical and Electronic Engineering, University College London. His research interests lie in the field of wireless communications and signal processing, with particular focus on green communications, large-scale antenna systems, integrated sensing and communications, and interference mitigation techniques. He was the co-recipient of the 2021 IEEE SPS Young Author Best Paper Award and the recipient of Best Paper Awards at the IEEE GLOBECOM 2015 and IEEE WCNC 2019 conferences. He is an Editor for IEEE TWC and the IEEE OJSP, and Editor-at-Large for IEEE OJ-COMS. He has been an Editor for IEEE TCOM and IEEE COMML, and a Guest Editor for a number of IEEE JSTSP and IEEE JSAC special issues. He is a founding member and Vice-Chair of the IEEE Emerging Technology Initiative on Integrated Sensing and Communications, and Chair of the IEEE Special Interest Group on Energy Harvesting Communication Networks.

Yonina C. Eldar (FIEEE) is a professor in the Department of Mathematics and Computer Science at the Weizmann Institute of Science, Rehovot, Israel, where she heads the Center for Biomedical Engineering and Signal Processing. She is also a visiting professor at the Massachusetts Institute of Technology (MIT) and at the Broad Institute of MIT and Harvard, Cambridge, Massachusetts, USA, and an adjunct professor at Duke University, Durham, North Carolina. She is a member of the Israel Academy of Sciences and Humanities and heads the Committee for Promoting Gender Fairness in Higher Education Institutions in Israel. She is the editor-in-chief of Foundations and Trends in Signal Processing and serves IEEE on several technical and award committees. She has received many awards for excellence in research and teaching, including the IEEE Signal Processing Society Technical Achievement Award, the IEEE/AESS Fred Nathanson Memorial Radar Award, and the IEEE Kiyo Tomiyasu Award. She is a Fellow of the IEEE and of EURASIP.

Abstract:

As the standardization of 5G gradually solidifies, researchers are speculating what 6G will be. A common theme in many perspectives is that the 6G Radio Access Network (RAN) should become multi-functional: it should serve as edge infrastructure providing site-specific services for surrounding users, rather than offering communication-only functionality. As jointly suggested by recent advances from the communications and signal processing communities, radio sensing functionality can be integrated into the 6G RAN in a low-cost and fast manner. The future cellular network could therefore image and measure the surrounding environment to enable advanced location-aware services, ranging from the physical to the application layer. This line of research is typically referred to as Integrated Sensing and Communications (ISAC), which has found applications in numerous emerging areas, including vehicular networks, environmental monitoring, the Internet of Things, and indoor services such as human activity recognition.
In this tutorial, we will first overview the background and application scenarios of ISAC. As a step further, we will introduce the state-of-the-art research progress on this topic, organized into four technical parts: 1) fundamental limits, 2) waveform design for ISAC, 3) applications supported by ISAC, and 4) ISAC prototyping. Finally, we will conclude the tutorial by summarizing future directions and open problems in the area of ISAC.
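As a pointer to the waveform-design part, one representative formulation from the dual-functional radar-communication literature is reproduced below in simplified notation, purely as an illustrative example (the symbols and constraint are assumptions of this sketch and not necessarily the exact formulation used in the tutorial): the transmitted space-time block X trades off the multi-user interference experienced by the communication users against its similarity to a desired radar waveform X_0.

\min_{\mathbf{X}} \ \ \rho\,\lVert \mathbf{H}\mathbf{X} - \mathbf{S} \rVert_F^2 \;+\; (1-\rho)\,\lVert \mathbf{X} - \mathbf{X}_0 \rVert_F^2
\quad \text{s.t.} \quad \tfrac{1}{L}\,\lVert \mathbf{X} \rVert_F^2 \le P_T,

where H is the downlink channel matrix, S the intended communication symbols, L the block length, P_T the power budget, and \rho \in [0,1] a weighting factor whose sweep from 0 to 1 traces the sensing-communication trade-off.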


TU-21: Wireless Blockchain Networks for Applications of Cyber-Physical Systems

VIRTUAL

Presenter:
Lei Zhang (University of Glasgow, U.K); Salil Kanhere (UNSW Sydney, Australia); Xu Li (InterDigital Communications, USA); Chonggang Wang (InterDigital Communications, USA)

Biography:
Lei Zhang is a Senior Lecturer (Associate Professor) at the University of Glasgow, U.K. He received his Ph.D. from the University of Sheffield, U.K. His research interests include wireless communication systems and networks, blockchain technology, data privacy and security, radio access network slicing (RAN slicing), and the Internet of Things (IoT). He has 20 patents granted/filed in more than 30 countries/regions, including the US, UK, EU, China, and Japan. Dr Zhang has published 3 books and 150+ peer-reviewed papers. He is an associate editor of the IEEE Internet of Things (IoT) Journal, IEEE Wireless Communications Letters, and Digital Communications and Networks. He has delivered tutorials at IEEE ICC'20, PIMRC'20, VTC Fall'21, ICBC'21, and EUSIPCO'21. Dr Zhang has been working on wireless blockchain networks for the last several years and has published 40+ publications on the topic. He is the founding Chair of the IEEE Special Interest Group on Wireless Blockchain Networks, and his research on blockchain has been broadly covered by media including the BBC and Bloomberg.

Salil Kanhere is a Professor in the School of Computer Science and Engineering at UNSW Sydney, Australia. He received his MS and PhD in Electrical Engineering from Drexel University, Philadelphia. His research interests include pervasive computing, the Internet of Things, cyber-physical systems, blockchain, cybersecurity, and applied machine learning. He has published over 250 peer-reviewed articles and delivered over 30 keynote talks and tutorials on these topics, and he has received 8 Best Paper awards. His h-index is 47 and his research has received over 9700 citations. He has co-authored a book on blockchain for cyber-physical systems, published by Artech House in 2020. He is a contributing research staff member at CSIRO's Data61 and has held visiting positions at the Institute for Infocomm Research Singapore, Technical University of Darmstadt, University of Zurich, and Graz University of Technology. Salil is an ACM Distinguished Speaker and a Humboldt Research Fellow. He is Editor-in-Chief of Ad Hoc Networks and an Area Editor for IEEE Transactions on Network and Service Management, Pervasive and Mobile Computing, and Computer Communications. He regularly serves on the organizing committees of a number of IEEE and ACM international conferences (examples include PerCom, MobiSys, CPS-IoT Week, WoWMoM, LCN, and MSWiM). He was the General Chair of the IEEE International Conference on Blockchain and Cryptocurrency (ICBC 2021). Salil is a Senior Member of the IEEE and ACM.

Xu Li is currently a senior staff engineer at InterDigital Communications Inc. His research interests include 3GPP wireless systems, the Internet of Things (IoT), blockchain technology, and data semantics. He has published technical papers in mainstream international journals and conferences, such as IEEE INFOCOM, IEEE TPDS, IEEE JSAC, IEEE Wireless Communications, and the IEEE IoT Journal, and has served on the technical program committees of major conferences such as IEEE Globecom, IEEE ICC, and IEEE WCNC. His current major activities include wireless system standardization (3GPP, oneM2M, IETF, W3C, etc.), and he has more than 80 approved/pending US patent applications.

Chonggang Wang is a Principal Engineer at InterDigital. He has 20+ years of experience in the field of communications, networking, and computing, including research, development, and standardization of Internet technologies, Internet of Things (IoT) architecture and protocols, and cellular and short-range wireless technologies. Chonggang currently leads a technical team in the Technology Evolution and Prototyping department of InterDigital's Research and Innovation Wireless Lab. In this role, Chonggang and his team focus on research, innovation, and standardization of blockchain technology and its applications for future communications and computing systems (e.g., 5G/6G, decentralized machine learning, federated learning). Chonggang actively engages in collaborations with industry and leading universities/institutions to explore future networking and networked systems. He participates in industry standardization activities with ETSI, IETF, oneM2M, 3GPP, and IEEE; he is the rapporteur of ETSI ISG PDL-009 on "PDL for Federated Data Management" and PDL-013 on "PDL for Supporting Distributed Data Management". His research interests include blockchain and distributed ledger technologies, the quantum internet, and intelligent IoT. He is a Fellow of the IEEE for his contributions to IoT enabling technologies. He is the founding Editor-in-Chief of the IEEE IoT Journal and is currently the Editor-in-Chief of IEEE Network - The Magazine of Global Internetworking.

Abstract:
Due to its salient features, including decentralization, anonymity, security, trust, and auditability, blockchain has attracted tremendous attention as a way to address the challenges in wireless cyber-physical systems (CPS), which encompass a broad range of devices capable of sensing the environment and communicating with others. Driven by emerging technologies (e.g., 5G, the industrial Internet of Things (IoT), artificial intelligence (AI)), more and more CPS applications are wirelessly connected. However, most blockchain systems are designed for stable wired communication networks running on advanced devices under the assumption of sufficient communication resources. Constrained by the highly dynamic wireless channel and scarce frequency spectrum, communication can significantly affect key blockchain performance metrics such as security, transaction throughput, latency, and scalability. This in-depth tutorial will cover blockchain technologies, applications, and standards for wireless CPS. Specifically, we will start by presenting wireless blockchain networks (WBN) under various commonly used consensus mechanisms (CMs), analyzing and demonstrating how much communication resource is needed to run such a network for wireless CPS; then, selected blockchain applications in wireless CPS will be covered; finally, relevant blockchain standards will be presented. In particular, we will answer the following questions:
•    What is the role of communication, and what are the procedures in WBN under various commonly used CMs (e.g., PoW, PBFT, Raft), with different network topologies (e.g., mesh, tree, etc.) and communication protocols (e.g., grant-based, contention-based)?
•    What is the analytical relationship between blockchain performance and communication resource provision under different communication protocols for different CPS applications? (A toy calculation illustrating this kind of accounting is sketched after this list.)
•    How can we use blockchain to solve the most imperative challenges we are facing in typical CPS such as smart vehicles, supply chains, and healthcare?
•    How can industry standards promote the adoption and deployment of blockchain technologies for wireless CPS applications, and what is the current status of the blockchain ecosystem, including standardization?
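The toy Python calculation below illustrates the kind of communication-resource accounting raised by the second question: it counts the messages exchanged in one round of a PBFT-style consensus under a simple all-to-all broadcast model and converts them into airtime for an assumed per-message transmission time. Both the message-count model and the numbers are simplifying assumptions made only for illustration; the tutorial's own analytical models are more detailed.

def pbft_messages(n):
    """Approximate message count for one PBFT round with n replicas
    (pre-prepare + prepare + commit + reply; simplified all-to-all model)."""
    pre_prepare = n - 1
    prepare = (n - 1) ** 2
    commit = n * (n - 1)
    reply = n
    return pre_prepare + prepare + commit + reply

# Assumed airtime per message (e.g., a contention-based MAC), and the
# pessimistic assumption that messages are sent sequentially with no reuse.
t_msg = 2e-3   # seconds

for n in (4, 16, 64):
    msgs = pbft_messages(n)
    print(f"n={n:3d} replicas: ~{msgs:5d} messages, "
          f"~{msgs * t_msg:6.2f} s of airtime per consensus round")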


TU-22: Scalable, Accurate, and Privacy-Preserving Localization in B5G Wireless Networks

VIRTUAL

Presenter:
Andreas F. Molisch (University of Southern California); Daoud Burghal (Samsung Research America); and Lei Chu (University of Southern California, USA)

Biography:
Andreas F. Molisch is the Solomon Golomb – Andrew and Erna Viterbi Chair Professor at the University of Southern California. He previously was at TU Vienna, AT&T (Bell) Labs, Lund University, and Mitsubishi Electric Research Labs. His research interests are in wireless communications, with emphasis on wireless propagation channels, multi-antenna systems, ultrawideband signaling and localization, novel modulation methods, and caching for wireless content distribution. He is the author of four books, 21 book chapters, more than 280 journal papers and 380 conference papers, as well as 70 granted patents and many standards contributions. He is a Fellow of the National Academy of Inventors, IEEE, AAAS, and IET, a Member of the Austrian Academy of Sciences, an IEEE Distinguished Lecturer, and the recipient of numerous awards.

Daoud Burghal received the B.S. degree in electrical engineering from the University of Jordan, Amman, Jordan, in 2007, the M.S. degree in electrical engineering and statistics from the University of Southern California, Los Angeles, CA, USA, and the Ph.D. degree in electrical engineering from the University of Southern California in 2019. After his Ph.D. degree, he was a Post-Doctoral Scholar with the WiDeS Laboratory. Later, he was a Wireless Research and Development System Engineer at Qualcomm. In 2022, he joined Samsung Research America as an AI Research Engineer. His research interests include different areas of wireless communications, AI-assisted communication, and joint communication and learning. 

Lei Chu (IEEE Senior Member) is a full-time research scholar at the University of Southern California, Los Angeles, USA. Before that, he was a research associate at the School of Electronics, Information and Electrical Engineering, Shanghai Jiao Tong University, where he defended his Ph.D. in December 2019. He was a visiting scholar at the University of Tennessee, Knoxville, in 2019. His current research interests include integrating information theory into neural network optimization and extending it to wireless communications and intelligent sensing applications. He has contributed to three book chapters (one in a textbook), authored over fifty papers in refereed journals and conferences, and been issued over ten patents. He serves as a regular reviewer, with over sixty peer reviews for twenty journals. He received the Outstanding Master's Thesis Award in 2015, the Outstanding Ph.D. Graduate Award in March 2020, and the International Postdoctoral Exchange Fellowship in June 2020. He serves on the Technical Program Committees of two international conferences and is the Leading Guest Editor of a Special Issue on Mixed Reality Wireless Sensing in Applied Sciences. He is dedicated to reproducible research and has made many of his codes publicly available.

Abstract:
In the past decades, localization through wireless networks was considered an auxiliary service with limited use-case scenarios. However, driven by the considerable commercial interest in location-based services, the integration of communication and localization is being actively discussed for future wireless networks and is envisioned as one of the critical enablers of Beyond 5G (B5G) networks. Although there are numerous localization solutions, they may be limited in scalability and achievable accuracy, or may not consider privacy concerns. We provide a cutting-edge tutorial on Scalable, Accurate, and Privacy-preserving (SAP) localization in B5G wireless networks. This tutorial covers the fundamentals of Artificial Intelligence (AI)-based localization and its challenges in B5G networks. In particular, we discuss relevant wireless signal features, effective feature representations, and feature accessibility in wireless systems. We further discuss suitable AI models, with in-depth discussions of advanced AI techniques for SAP solutions. Moreover, we elaborate on novel deep domain-adaptation-based AI techniques, promising enabling technologies that combine the knowledge learned from a known environment (or set of states) with the limited information available in a new environment. Furthermore, various domain adaptation technologies will be introduced and discussed for SAP localization. Lastly, we conclude this tutorial and discuss some interesting future research directions.
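
As a rough, illustrative companion to the domain-adaptation idea described above (not the presenters' actual method), the following Python sketch aligns the per-feature statistics of an unlabelled target environment to those of a labelled source environment before running a simple fingerprint-style position estimator. The array shapes, the toy data, and the nearest-neighbour regressor are assumptions chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Source environment: plentiful labelled CSI-like features -> 2-D positions.
X_src = rng.normal(0.0, 1.0, size=(500, 16))
y_src = rng.uniform(0.0, 10.0, size=(500, 2))

# Target environment: same feature space, shifted/scaled statistics, no labels.
X_tgt = 1.5 * rng.normal(0.3, 1.0, size=(200, 16))

def align_to_source(X_target, X_source):
    """Match per-feature mean and variance of the target features to the
    source features, a crude stand-in for deep domain-adaptation methods."""
    mu_t, std_t = X_target.mean(axis=0), X_target.std(axis=0) + 1e-9
    mu_s, std_s = X_source.mean(axis=0), X_source.std(axis=0)
    return (X_target - mu_t) / std_t * std_s + mu_s

def knn_localize(x, X_train, y_train, k=5):
    """Estimate a position as the average of the k nearest fingerprints."""
    nearest = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    return y_train[nearest].mean(axis=0)

X_tgt_aligned = align_to_source(X_tgt, X_src)
print("estimated position (m):", knn_localize(X_tgt_aligned[0], X_src, y_src))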


TU-23: Edge Artificial Intelligence for 6G: Scalability, Trustworthiness, and Applications

VIRTUAL

Presenter:
Yuanming Shi (ShanghaiTech University, China); Yong Zhou (ShanghaiTech University, China); Youlong Wu (ShanghaiTech University, China); Dingzhu Wen (ShanghaiTech University, China)

Biography:
Yuanming Shi received the B.S. degree in electronic engineering from Tsinghua University, Beijing, China, in 2011. He received the Ph.D. degree in electronic and computer engineering from The Hong Kong University of Science and Technology (HKUST) in 2015. Since September 2015, he has been with the School of Information Science and Technology at ShanghaiTech University, where he is currently a tenured Associate Professor. He visited the University of California, Berkeley, CA, USA, from October 2016 to February 2017. His research areas include optimization, statistics, machine learning, and their applications to 6G, IoT, and AI. Dr. Shi is a recipient of the 2016 IEEE Marconi Prize Paper Award in Wireless Communications and the 2016 Young Author Best Paper Award from the IEEE Signal Processing Society. He also received the 2021 IEEE ComSoc Asia-Pacific Outstanding Young Researcher Award. He is an editor of IEEE Transactions on Wireless Communications and IEEE Journal on Selected Areas in Communications.

Yong Zhou is currently an assistant professor at the School of Information Science and Technology at ShanghaiTech University. From 2015 to 2017, he worked as a Post-Doctoral Research Fellow in the Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, Canada. He received the PhD degree from the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada, in 2015. He served as the TPC track co-chair of IEEE VTC 2020 Fall and the general co-chair of IEEE ICC 2022 workshop on edge artificial intelligence for 6G. His research interests include federated learning, B5G, and IoT. 

Youlong Wu obtained his B.S. degree in electrical engineering from Wuhan University, Wuhan, China, in 2007, the M.S. degree in electrical engineering from Shanghai Jiaotong University, Shanghai, China, in 2011, and the Ph.D. degree from Telecom ParisTech, Paris, France, in 2014. In December 2014, he joined the Institute for Communication Engineering, Technical University of Munich (TUM), Munich, Germany, as a postdoctoral researcher. In 2017, he joined the School of Information Science and Technology at ShanghaiTech University. He obtained the TUM Fellowship in 2014 and is an Alexander von Humboldt research fellow. His research interests are in communication theory, information theory, and their applications, e.g., coded caching, distributed computation, and machine learning.

Dingzhu Wen received the Bachelor's and Master's degrees from Zhejiang University in 2014 and 2017, respectively, and the Ph.D. degree from The University of Hong Kong in 2021. Subsequently, he joined ShanghaiTech University, where he is currently an assistant professor at the School of Information Science and Technology. His research interests include edge intelligence, integrated sensing and communication, over-the-air computation, and in-band full-duplex communications.

Abstract:
The boom of artificial intelligence (AI) services is driving 6G to revolutionize wireless from “connected things” to “connected intelligence”. However, deep learning-based AI systems generally suffer from excessive latency, high energy consumption, and severe network congestion in both the training and inference phases. By integrating sensing, communication, computation, and intelligence, edge AI, as a disruptive technology, has the potential to improve the effectiveness, scalability, and trustworthiness of AI services. However, edge AI may incur a large volume of traffic over wireless networks and cause privacy leakage. Hence, it is critically important to develop decentralized optimization algorithms, advanced communication techniques, effective resource allocation methods, and holistic system architectures. This tutorial aims to present recent advances in decentralized optimization, information theory, and wireless networking technologies for scalable and trustworthy edge AI, followed by a discussion of practical applications. In particular, decentralized zeroth-, first-, and second-order optimization methods with various network topologies and data structures, as well as fair, robust, private, and explainable learning approaches, will be presented to achieve scalability and trustworthiness for edge AI models and algorithms. Task-oriented coding strategies, disruptive network architectures, advanced communication techniques, and service-aware resource allocation algorithms will also be presented. Finally, emerging edge AI applications and software and hardware platforms will be introduced.
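
To make the notion of decentralized first-order optimization over edge devices more concrete, the following Python sketch runs gradient descent with neighbour (gossip) averaging over a ring of devices holding private quadratic losses. The ring topology, the toy losses, and the step sizes are assumptions chosen for illustration; they are not the algorithms covered in the tutorial.

import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim, steps, lr = 8, 4, 200, 0.05

# Node i holds a private quadratic loss f_i(x) = 0.5 * ||x - a_i||^2,
# so the global minimiser of the sum is simply the mean of the a_i.
targets = rng.normal(size=(n_nodes, dim))
x = np.zeros((n_nodes, dim))                 # one local model copy per device

for _ in range(steps):
    grads = x - targets                      # local gradients, one per node
    x = x - lr * grads                       # 1) local gradient step
    # 2) mixing step: average each copy with its two ring neighbours
    x = (np.roll(x, 1, axis=0) + x + np.roll(x, -1, axis=0)) / 3.0

consensus_error = np.max(np.linalg.norm(x - targets.mean(axis=0), axis=1))
print("max distance of local copies from the global optimum:", consensus_error)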


TU-24: Wireless Channel Measurements, Characteristics Analysis, and Models Towards 6G

VIRTUAL

Presenter:
Cheng-Xiang Wang (Southeast University, China); Jie Huang (Southeast University, China); Haiming Wang (Southeast University, China); Harald Haas (University of Strathclyde, UK)

Biography:
Prof. Cheng-Xiang Wang received the B.Sc. and M.Eng. degrees in communication and information systems from Shandong University, China, in 1997 and 2000, respectively, and the Ph.D. degree in wireless communications from Aalborg University, Denmark, in 2004. He has been with Heriot-Watt University, Edinburgh, United Kingdom, since 2005 and became a professor in 2011. In 2018, he joined Southeast University, China, and Purple Mountain Laboratories, China, as a professor. He is now the Executive Dean of the School of Information Science and Engineering, Southeast University. He has authored 4 books, 3 book chapters, and more than 470 papers in refereed journals and conference proceedings, including 25 highly cited papers. He has also delivered 24 invited keynote speeches/talks and 13 tutorials in international conferences. His current research interests include wireless channel measurements and modeling, 6G wireless communication networks, and electromagnetic information theory. He is a Member of the Academia Europaea (The Academy of Europe), a Member of the European Academy of Sciences and Arts (EASA), a Fellow of the Royal Society of Edinburgh (FRSE), IEEE, IET, and China Institute of Communications (CIC), an IEEE Communications Society Distinguished Lecturer in 2019 and 2020, and a Highly-Cited Researcher recognized by Clarivate Analytics in 2017-2020. He is currently an Executive Editorial Committee Member of the IEEE TWC. He has served as an Editor for over ten international journals. He has served as a TPC Member, a TPC Chair, and a General Chair for more than 80 international conferences. He received 14 Best Paper Awards.

Dr. Jie Huang received the B.E. degree in Information Engineering from Xidian University, China, in 2013, and the Ph.D. degree in Information and Communication Engineering from Shandong University, China, in 2018. From Oct. 2018 to Oct. 2020, he was a Postdoctoral Research Associate in the National Mobile Communications Research Laboratory, Southeast University, China, supported by the National Postdoctoral Program for Innovative Talents. From Jan. 2019 to Feb. 2020, he was a Postdoctoral Research Associate at Durham University, U.K. Since Mar. 2019, he has been a part-time researcher at Purple Mountain Laboratories, China. Since Nov. 2020, he has been an Associate Professor in the National Mobile Communications Research Laboratory, School of Information Science and Engineering, Southeast University, China. He has authored and co-authored more than 50 papers in refereed journals and conference proceedings. He received three Best Paper Awards, from WPMC 2016, WCSP 2020, and WCSP 2021, and has delivered four tutorials, at IEEE/CIC ICCC 2021, IEEE PIMRC 2021, IEEE ICC 2022, and IEEE VTC 2022 Spring. His research interests include millimeter wave, massive MIMO, and reconfigurable intelligent surface channel measurements and modeling, wireless big data, and 6G wireless communications.

Prof. Haiming Wang received the B.Eng., M.S., and Ph.D. degrees in Electrical Engineering from Southeast University, Nanjing, China, in 1999, 2002, and 2009, respectively. Since 2002, he has been with the State Key Laboratory of Millimeter Waves, School of Information Science and Engineering, Southeast University, China, where he is currently a distinguished professor. He is also a part-time professor with the Purple Mountain Laboratories, Nanjing, China. In 2008, he was a Visiting Scholar with the Blekinge Institute of Technology (BTH), Sweden. He has authored and co-authored over 50 journal papers in IEEE TAP and other peer-reviewed academic journals, and has filed more than 70 patents, of which 52 have been granted. He was twice recognized by the IEEE Standards Association, in 2018 and 2020, for contributing to the development of IEEE 802.11aj, and received the first-class Science and Technology Progress Award of Jiangsu Province, China, in 2009. His current research interests include AI-powered antenna and radiofrequency technologies (iART), AI-powered channel measurement and modeling technologies (iCHAMM), and integrated communications and sensing (iCAS). He has served as a TPC member or session chair for many international conferences, such as IEEE ICCT 2011, IEEE IWS 2013, and IEEE VTC 2016.

Prof. Harald Haas received the Ph.D. degree in wireless communications from the University of Edinburgh, Edinburgh, U.K., in 2001. He is the Director of the LiFi Research and Development Centre at the University of Strathclyde. He is also the Initiator, co-founder, and Chief Scientific Officer of pureLiFi Ltd. He has authored 600 conference and journal papers, including papers in Science and Nature Communications. His main research interests are in optical wireless communications, hybrid optical wireless and RF communications, spatial modulation, and interference coordination in wireless networks. His team invented spatial modulation. He introduced LiFi to the public at an invited TED Global talk in 2011 and gave a second TED Global lecture in 2015 on the use of solar cells as LiFi data detectors and energy harvesters. In 2016, he received the Outstanding Achievement Award from the International Solid State Lighting Alliance. In 2019, he received the IEEE Vehicular Technology Society James Evans Avant Garde Award. Professor Haas was elected a Fellow of the Royal Society of Edinburgh (RSE) in 2017. In the same year, he received a Royal Society Wolfson Research Merit Award and was elevated to IEEE Fellow. In 2018, he received a three-year EPSRC Established Career Fellowship extension and was elected Fellow of the IET. He was elected Fellow of the Royal Academy of Engineering (FREng) in 2019.

Abstract:
The proposed tutorial offers a comprehensive and in-depth course for communication professionals and academics, addressing recent advances and future challenges in channel measurements and models for sixth-generation (6G) wireless communication systems. The network architecture and key technologies for 6G that will enable global coverage, all spectra, and full applications will be discussed first. Channel measurements and non-predictive channel models are then reviewed for challenging 6G scenarios and frequency bands, focusing on shortwave, millimeter wave (mmWave), terahertz (THz), and optical wireless communication channels under all-spectra scenarios; satellite, unmanned aerial vehicle (UAV), maritime, and underwater acoustic communication channels under global-coverage scenarios; and high-speed train (HST), vehicle-to-vehicle (V2V), ultra-massive multiple-input multiple-output (MIMO), reconfigurable intelligent surface (RIS), industrial Internet of Things (IoT), and orbital angular momentum (OAM) communication channels under full-application scenarios. New machine learning-based predictive channel models will also be investigated. A non-predictive 6G pervasive channel model will then be proposed, which is expected to serve as a baseline for future standardized 6G channel models. Future research challenges and trends for 6G channel measurements and models will be discussed at the end of the tutorial.
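
As a minimal illustration of what a stochastic multipath channel model produces (not any of the standardized or measurement-based models discussed in the tutorial), the Python sketch below draws a handful of paths with random delays and phases and evaluates the resulting frequency response over an assumed 100 MHz band. The number of paths, the exponential power decay, and the delay spread are assumptions chosen purely for illustration.

import numpy as np

rng = np.random.default_rng(7)
n_paths, delay_spread = 12, 100e-9          # assumed 12 paths, 100 ns spread

delays = rng.exponential(delay_spread, n_paths)
powers = np.exp(-delays / delay_spread)
powers /= powers.sum()                       # normalise total path power to 1
phases = rng.uniform(0.0, 2.0 * np.pi, n_paths)
gains = np.sqrt(powers) * np.exp(1j * phases)

def channel_frequency_response(freqs_hz):
    """H(f) = sum over paths of g_p * exp(-j 2 pi f tau_p)."""
    return (gains[None, :] * np.exp(-2j * np.pi * np.outer(freqs_hz, delays))).sum(axis=1)

freqs = np.linspace(0.0, 100e6, 256)         # evaluate over 100 MHz
H = channel_frequency_response(freqs)
print("frequency-selective fading, mean |H|^2 approx", round(float(np.mean(np.abs(H) ** 2)), 3))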


TU-25: Ultra-Dense LEO Satellite-based Communication Systems: A Tractable Modelling Technique

THURSDAY, DEC 8 14:00 - 17:30  /  LOCATION: Capri I

Presenter:
Mustafa A. Kishk (Maynooth University, Ireland); Mohamed-Slim Alouini (King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia)

Biography:
Mustafa A. Kishk received the B.Sc. and M.Sc. degrees from Cairo University, Giza, Egypt, in 2013 and 2015, respectively, and the Ph.D. degree from Virginia Tech, Blacksburg, VA, USA, in 2018. He is an assistant professor at the Electronic Engineering Department, Maynooth University, Ireland. Before that, he was a Postdoctoral Research Fellow with the Communication Theory Laboratory, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia. His current research interests include stochastic geometry, energy harvesting wireless networks, UAV-enabled communication systems, and satellite communications.

Mohamed-Slim Alouini was born in Tunis, Tunisia. He received the Ph.D. degree in Electrical Engineering from the California Institute of Technology (Caltech) in 1998. He served as a faculty member at the University of Minnesota and then at Texas A&M University at Qatar before joining King Abdullah University of Science and Technology (KAUST) in 2009, where he is now a Distinguished Professor of Electrical and Computer Engineering. Prof. Alouini is a Fellow of the IEEE and of the OSA. He is currently particularly interested in addressing the technical challenges associated with the uneven distribution of, access to, and use of information and communication technologies in far-flung, rural, low-population-density, low-income, and/or hard-to-reach areas.

Abstract:
We are witnessing an unprecedented boost in the space industry. The significant technological advances in the industry of low earth orbit (LEO) satellites have opened the door to a new realm of LEO-based applications. One key application is providing internet broadband services to people everywhere around the globe, which is considered a significant step towards resolving the digital divide problem. The main driver to achieve such satellite-based global connectivity is deploying large numbers of LEO satellites at a set of altitudes, ranging from 300 km to 1500 km, to ensure that every part of the earth will be covered by at least one satellite at all times. Given that we have multiple competing companies launching various constellations with diverse altitudes and numbers of satellites, we can envision a set of spheres concentric with the earth with large numbers of LEO satellites distributed on the surfaces of each of these spheres. Due to the fundamental difference between these novel communication systems, especially the spatial distribution of the communication nodes, and typical terrestrial communication networks, creative techniques are needed to enable the mathematical analysis of such communication systems. In this tutorial, we discuss a recently proposed mathematical framework that enables tractable analysis of LEO satellite-enabled communication systems while capturing the influence of the number and altitudes of satellites as well as the spatial distribution of earth stations. Firstly, we describe how this stochastic geometry-based framework is modelled and discuss its accuracy. Next, we provide a detailed example where this framework can be used for coverage analysis. We then introduce and discuss integrated space-aerial-terrestrial networks. Finally, we discuss how this framework can be used to study routing and end-to-end latency analysis in such networks. Realistic values from existing constellations, such as OneWeb and Starlink, are further used as case studies in this tutorial.
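
For intuition only, the following Python Monte Carlo sketch estimates the probability that a ground user sees at least one satellite above a minimum elevation angle when satellites are modelled as uniformly distributed on a sphere concentric with the earth. The constellation size, altitude, and elevation threshold are assumptions; the stochastic-geometry framework discussed in the tutorial obtains such quantities analytically rather than by simulation.

import numpy as np

rng = np.random.default_rng(2)
R_E = 6371e3                      # earth radius [m]
altitude = 550e3                  # assumed satellite altitude [m]
n_sats = 720                      # assumed constellation size
min_elev = np.deg2rad(25.0)       # assumed minimum elevation angle
trials = 2000

user = np.array([0.0, 0.0, R_E])  # user on the earth's surface
zenith = user / np.linalg.norm(user)

covered = 0
for _ in range(trials):
    # Satellites uniformly distributed on the sphere of radius R_E + altitude.
    v = rng.normal(size=(n_sats, 3))
    sats = (R_E + altitude) * v / np.linalg.norm(v, axis=1, keepdims=True)
    d = sats - user
    elevation = np.arcsin(d @ zenith / np.linalg.norm(d, axis=1))
    covered += bool(np.any(elevation >= min_elev))

print("estimated coverage probability:", covered / trials)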


TU-26: Compute-Caching-Communication Integration for Efficient Delivery of Metaverse Experiences

THURSDAY, DEC 8 14:00 - 17:30  /  LOCATION: Capri II

Presenter:
Andreas F. Molisch (University of Southern California); Jaime Llorca (New York University Tandon School of Engineering); Antonia M. Tulino (Università degli Studi di Napoli Federico II); Yang Cai (University of Southern California)

Biography:
Andreas F. Molisch is the Solomon Golomb – Andrew and Erna Viterbi Chair Professor at the University of Southern California. He previously was at TU Vienna, AT&T (Bell) Labs, Lund University, and Mitsubishi Electric Research Labs. His research interest is wireless communications, with emphasis on wireless propagation channels, multi-antenna systems, ultrawideband signaling and localization, novel modulation methods, and caching for wireless content distribution. He is the author of four books, 21 book chapters, more than 280 journal papers, 380 conference papers, as well as 70 granted patents and many standards contributions. He is a Fellow of the National Academy of Inventors, IEEE, AAAS, and IET, as well as Member of the Austrian Academy of Sciences, an IEEE Distinguished Lecturer and recipient of numerous awards.

Jaime Llorca is a Research Professor and Technology Consultant with the New York University Tandon School of Engineering. He previously held a Senior Research Scientist position with the Network Algorithms Group at Nokia Bell Labs, a Research Scientist position with the End-to-End Networking Group at Alcatel-Lucent Bell Labs, and a post-doctoral position with the Center for Networking of Infrastructure Sensors, College Park, Maryland. He received M.S. and Ph.D. degrees in Electrical and Computer Engineering from the University of Maryland, College Park. His research interests are in the field of network algorithms, optimization, machine learning, and distributed control, with applications to next-generation communication networks, distributed/edge cloud, end-to-end service orchestration, and content distribution. He has made fundamental contributions to the mathematics of content delivery and distributed cloud networks, including pioneering cooperative caching, network coding, and cloud network control algorithms. He has authored more than 100 peer-reviewed publications, 3 book chapters, and 20 patents. He currently serves as Associate Editor for the IEEE/ACM Transactions on Networking. He is a recipient of the 2007 IEEE ISSNIP Best Paper Award, the 2016 IEEE ICC Best Paper Award, and the 2015 Jimmy H.C. Lin Award for Innovation.

Antonia M. Tulino is a full professor at the Università degli Studi di Napoli Federico II. She was previously at CWC in Oulu, Princeton University, Bell Labs, and the Università degli Studi del Sannio. Since 2019, she has also held a Research Professor position with the New York University Tandon School of Engineering and is the Scientific Director of the 5G Academy, Italy, jointly organized by the University of Napoli and leading ICT companies. Her research interests lie in the area of communication networks, approached with the complementary tools provided by signal processing, information theory, and random matrix theory. She is the author of one monograph, 10 book chapters, more than 70 journal papers, and 160 conference papers, as well as more than 15 patents, 4 of which are licensed. She has been an IEEE Fellow since 2013. She has received several paper awards, including the 2009 Stephen O. Rice Prize in the Field of Communications Theory. She has chaired the IEEE Fellow committee of the Information Theory Society. She was the recipient of the UC3M-Santander Chair of Excellence from 2018 to 2019 and was selected by the National Academy of Engineering for the Frontiers of Engineering program in 2013.

Yang Cai is a Ph.D. candidate at the University of Southern California. His research interests include stochastic network optimization, joint 3C optimization, and next-generation services (e.g., autonomous driving, augmented/virtual reality).

Abstract:
We are entering a rapidly unfolding future driven by the proliferation of highly distributed real-time interactive services, with applications to system automation (e.g., smart transportation/grids/factories/cities) and Metaverse experiences (e.g., augmented/virtual reality, gaming, immersive media), that impose unprecedented communication, computation, and storage requirements on the hosting infrastructure. In this context, the design of network control policies capable of joint compute-caching-communication (3C) resource orchestration and end-to-end flow control is of key importance for the integrated operation of distributed compute- and cache-enabled devices forming part of a universal networked compute platform supporting next-generation applications.

This tutorial provides a comprehensive review of state-of-the-art models, methods, and algorithms for the design of integrated 3C network optimization and control policies for the efficient delivery of a wide class of next-generation interactive- and resource-intensive applications that we simply refer to as Metaverse applications. New service graph models are presented to accurately characterize the complex service composition, resource-intensive nature, and quality of experience (QoE) requirements of Metaverse applications, as well as the distributed, dynamic, and heterogeneous 3C resource nature of the hosting infrastructure. A cloud network flow optimization and control framework is then described in order to formalize the key elements of end-to-end cloud network optimization and control policy design, including system states, observations, utility functions, QoE constraints, and optimization/control actions in 3C networks. The tutorial will illustrate key generalizations of powerful network optimization and control methods such as multi-commodity-flow, network information flow, Lyapunov drift-plus-penalty control, and reinforcement learning, to 3C-integrated networks.
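
As a toy illustration of the Lyapunov drift-plus-penalty principle mentioned above (far simpler than the 3C network control policies covered in the tutorial), the following Python sketch controls a single transmission queue by choosing, in each slot, the service rate that minimizes V*power - Q*rate. The arrival rate, the rate/power menu, and the parameter V are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(3)
slots, V = 10_000, 20.0                    # V trades energy against backlog
rates = np.array([0, 1, 2, 3])             # packets served per slot
powers = np.array([0.0, 1.0, 2.5, 4.5])    # energy cost of each service rate

Q, total_energy, backlog = 0.0, 0.0, []
for _ in range(slots):
    arrivals = rng.poisson(1.2)                        # random packet arrivals
    # Drift-plus-penalty rule: choose the action minimising V*power - Q*rate.
    idx = int(np.argmin(V * powers - Q * rates))
    Q = max(Q - rates[idx], 0.0) + arrivals
    total_energy += powers[idx]
    backlog.append(Q)

# Larger V lowers average power at the cost of a larger average backlog.
print("avg backlog:", np.mean(backlog), " avg power:", total_energy / slots)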


TU-27: Open RAN Security and Privacy: Opportunities and Challenges

THURSDAY, DEC 8 14:00 - 17:30  /  LOCATION: Capri III

Presenter:
Madhusanka Liyanage (University College Dublin, Ireland)

Biography:
Madhusanka Liyanage (IEEE Senior Member, 2020) is an Assistant Professor/Ad Astra Fellow and Director of Graduate Research at the School of Computer Science, University College Dublin, Ireland. He is also a Docent/Adjunct Professor at the Center for Wireless Communications, University of Oulu, Finland, and an Honorary Adjunct Professor at the Department of Electrical and Information Engineering, University of Ruhuna, Sri Lanka. He received his Doctor of Technology degree in communication engineering from the University of Oulu, Oulu, Finland, in 2016. He was a recipient of the prestigious Marie Skłodowska-Curie Actions Individual Fellowship and a Government of Ireland Postdoctoral Fellowship during 2018-2020. He has been a Visiting Research Fellow at CSIRO, Australia; Lancaster University, U.K.; The University of New South Wales, Australia; the University of Sydney, Australia; Sorbonne University, France; IIT Roorkee, India; and The University of Oxford, U.K. In 2020, he received the "2020 IEEE ComSoc Outstanding Young Researcher" award from IEEE ComSoc EMEA. In 2021, he was ranked among the World's Top 2% Scientists (2020) in the list prepared by Elsevier BV and Stanford University, USA. He was also awarded an Irish Research Council (IRC) Research Ally Prize as part of the IRC Researcher of the Year 2021 awards for the positive impact he has made as a supervisor. Dr. Liyanage's research interests are 5G/6G, SDN, IoT, Blockchain, MEC, and mobile and virtual network security.

Abstract:
Open RAN (O-RAN) is a novel set of industry-level standards for the Radio Access Network (RAN) that defines interfaces supporting interoperation between vendors' equipment and offers network flexibility at a lower cost. Open RAN integrates the advances of network softwarization and Artificial Intelligence to enhance the operation of RAN devices. It opens up new possibilities for different stakeholders to develop RAN solutions within this open ecosystem. However, these benefits come with new security and privacy challenges. Because Open RAN represents a completely different RAN configuration from what exists today, it could lead to serious security and privacy issues if mismanaged, and stakeholders are understandably taking a cautious approach towards secure Open RAN deployment. In particular, the tutorial will provide a deep analysis of the security and privacy risks and challenges associated with the Open RAN architecture. We will then discuss possible security and privacy solutions to secure the Open RAN architecture and present relevant security standardization efforts. The tutorial will also discuss how Open RAN can be used to deploy more advanced security and privacy solutions in 5G and beyond RAN. Finally, the tutorial will provide guidance for subsequent research on Open RAN security and privacy at this initial phase of turning the vision into reality.


TU-28: Post-Deep Learning Era: Emerging Quantum Machine Learning for Sensing and Communications

VIRTUAL

Presenter:
Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories (MERL), USA); Pu (Perry) Wang (Mitsubishi Electric Research Laboratories (MERL), USA)

Biography:
Toshiaki Koike-Akino (Senior Member, IEEE) received the B.S. degree in electrical and electronics engineering, the M.S. and Ph.D. degrees in communications and computer engineering from Kyoto University, Kyoto, Japan, in 2002, 2003, and 2005, respectively. During 2006–2010, he was a Postdoctoral Researcher with Harvard University, Cambridge, MA, USA, and joined MERL, Cambridge, MA, USA, in 2010. His research interests include signal processing for data communications and sensing. He was the recipient of the YRP Encouragement Award 2005, the 21st TELECOM System Technology Award, the 2008 Ericsson Young Scientist Award, the IEEE GLOBECOM’08 Best Paper Award in Wireless Communications Symposium, the 24th TELECOM System Technology Encouragement Award, and the IEEE GLOBECOM’09 Best Paper Award in Wireless Communications Symposium. He is a Fellow of Optica (formerly OSA).

Pu (Perry) Wang (Member, IEEE) received the Ph.D. degree from the Stevens Institute of Technology in 2011. He was an intern at Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, in the summer of 2010. Before returning to MERL, he was a Research Scientist in the Mathematics and Modeling Department of Schlumberger-Doll Research, Cambridge, MA, contributing to the development of logging-while-drilling acoustics/NMR products. His current research interests include signal processing, Bayesian inference, statistical learning, and their applications to mmWave/THz/Wi-Fi sensing, wireless communications, networks, and automotive applications. He received the IEEE Jack Neubauer Memorial Award from the IEEE Vehicular Technology Society in 2013 for the paper "Multiantenna-assisted spectrum sensing for cognitive radio". He is currently an Associate Editor for IEEE Signal Processing Letters and an Associate Member of the IEEE SPS Sensor Array and Multichannel (SAM) Technical Committee.
 

Abstract:
Recent millimeter wave (mmWave) and massive multiple-input multiple-output (MIMO) technologies used in the fifth generation and beyond (B5G) can achieve high resolution in both the time and angular domains, making “integrated sensing and communications (ISAC)” a viable concept. In particular, Wi-Fi-based human monitoring has received much attention due to its decreasing cost and fewer privacy concerns compared with camera-based approaches. Modern deep neural networks (DNNs) have made Wi-Fi-band signals useful for user identification, emotion sensing, and skeleton tracking. This tutorial reviews the trends, solutions, and limits of DNNs for ISAC and discusses the potential future “post-DNN era” by introducing emerging quantum machine learning (QML) through step-by-step demonstrations. QML is considered a key driver for sixth-generation (6G) applications, yet there is little research so far that tackles practical problems. Quantum computers have the potential to realize computationally efficient signal processing compared to traditional digital computers by exploiting quantum mechanics, in terms of not only execution time but also energy consumption. We introduce the emerging QML framework for ISAC applications, envisioning a future era of quantum supremacy. QML is inherently suited to Wi-Fi sensing because cloud quantum computing platforms such as IBM Quantum and Amazon Braket are readily accessible over the network. The objective of this tutorial is three-fold: 1) to present the recent advancement of ISAC and Wi-Fi sensing; 2) to introduce the recent progress of the emerging QML framework as a post-DNN evolution; and 3) to demonstrate how QML is used in practice for sensing and communications research. Considering the rapid growth of quantum technologies, we believe that it is a good time to discuss QML as a potential post-DNN era.
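
To give a flavour of what a variational quantum model looks like, the Python sketch below simulates a one-qubit "classifier" in plain NumPy (no quantum SDK): a feature is encoded by a rotation gate, a trainable rotation follows, and the Pauli-Z expectation is the decision score, trained with the parameter-shift rule. The encoding, the toy labels, and the loss are assumptions; this is not one of the QML models demonstrated in the tutorial.

import numpy as np

def ry(angle):
    """Single-qubit Y-rotation gate as a real 2x2 matrix."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def expval_z(x, theta):
    """Encode feature x with RY(x), apply trainable RY(theta), measure <Z>."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return state[0] ** 2 - state[1] ** 2      # equals cos(x + theta)

# Toy binary task: label +1 for features near 0, -1 for features near pi.
X = np.array([0.1, 0.3, -0.2, 2.9, 3.3, 3.0])
y = np.array([1, 1, 1, -1, -1, -1])

theta, lr = 1.5, 0.2
for _ in range(100):
    # Parameter-shift rule gives the exact derivative of <Z> for rotation gates.
    grad = np.mean([
        -yi * 0.5 * (expval_z(xi, theta + np.pi / 2) - expval_z(xi, theta - np.pi / 2))
        for xi, yi in zip(X, y)
    ])
    theta -= lr * grad                        # minimise mean(-y * <Z>)

preds = np.sign([expval_z(xi, theta) for xi in X])
print("trained theta:", round(theta, 3), " accuracy:", np.mean(preds == y))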


TU-29: Meta Learning for Future Wireless Networks: Basics and Applications

VIRTUAL

Presenter:
Di Wu (Samsung AI Center Montreal, Canada); Ekram Hossain (University of Manitoba, Canada); Xue Liu (Samsung AI Center Montreal, Canada)

Biography:
Di Wu is currently a research scientist at the Samsung AI Center Montreal, where he mainly works on AI for communications systems. Before joining Samsung, he did postdoctoral research at Mila in Montreal and at Stanford University. He received the Ph.D. degree from McGill University, Montreal, Canada, in 2018 and the M.S. degree from Peking University, Beijing, China, in 2013. Di's research interests mainly lie in reinforcement learning and data-efficient machine learning algorithms (e.g., transfer learning, meta-learning, and multitask learning). He is also interested in leveraging such algorithms for applications in real-world systems (e.g., smart grids, communications systems, and intelligent transportation systems).

Ekram Hossain (F’15) is Professor in the Department of Electrical and Computer Engineering at University of Manitoba, Canada (https://home.cc.umanitoba.ca/~hossaina/). He is a Member (Class of 2016) of the College of the Royal Society of Canada, a Fellow of the Canadian Academy of Engineering, and a Fellow of the Engineering Institute of Canada. He was elevated to an IEEE Fellow ``for contributions to spectrum management and resource allocation in cognitive and cellular radio networks”. He was listed as a Clarivate Analytics Highly Cited Researcher in Computer Science in 2017, 2018, 2019, 2020, and 2021. He received the 2017 IEEE Communications Society (ComSoc) TCGCC (Technical Committee on Green Communications & Computing) Distinguished Technical Achievement Recognition Award ``for outstanding technical leadership and achievement in green wireless communications and networking”. He served as the Editor-in-Chief of IEEE Press (2018-2021), the ComSoc Director of Magazines (2020-2021), and the Editor-in-Chief of the IEEE Communications Surveys and Tutorials (2012--2016). He was an elected Member of the Board of Governors of the IEEE ComSoc (2018-2020). He served as the Technical Program Committee Chair for the IEEE International Conference on Communications 2022 (ICC'22). Currently, he serves as an Editor of the IEEE Transactions on Mobile Computing and the Director of Online Content (2022-2023) for the IEEE Communications Society.

Xue Liu (S'02, M'07, SM'19, F'20) is a Professor and William Dawson Scholar at McGill University. He is also VP of R&D, Chief Scientist, and Co-Director of the Samsung AI Center Montreal. He received his Ph.D. with multiple honors from the University of Illinois at Urbana-Champaign. He has also worked as the Samuel R. Thompson Chaired Associate Professor at the University of Nebraska-Lincoln, as a visiting scientist at HP Labs in Palo Alto, California, and as the Chief Scientist of Tinder Inc. His current research interests include AI and machine learning, computer and communications systems, 5G/6G technologies, CPS, and the Internet of Things. Dr. Liu has published over 300 highly cited journal and conference papers.
Dr. Liu is a Fellow of the Canadian Academy of Engineering and a Fellow of the IEEE. He has received many awards and recognitions, including the Mitacs Award for Exceptional Leadership (Professor), Outstanding Young Canadian Computer Science Researcher Prizes, the Tomlinson Scientist Award, and several IEEE and ACM Best Paper Awards. He serves or has served as an Editor/Associate Editor for ACM Transactions on Cyber-Physical Systems, IEEE/ACM Transactions on Networking, IEEE Transactions on Parallel and Distributed Systems, IEEE Transactions on Vehicular Technology, and IEEE Communications Surveys and Tutorials. He has served on the organizing committees of many IEEE and ACM conferences, including INFOCOM, IWQoS, CPS-IoT Week, ICCPS, e-Energy, RTSS, RTAS, and SenSys.

Abstract:
Machine learning has been recognized as a key ingredient of modern communications systems. However, most current machine learning methods assume the availability of a large amount of data, which can be a major challenge for many real-world applications. For example, in communications systems, it may be very difficult to collect a large amount of training data for certain applications. Also, real-world data distributions may drift over time, which requires online model learning with newly collected data. Meta learning, also referred to as learning to learn, offers a potential solution to this challenge. It aims to learn models that can quickly adapt to new tasks with only a few samples and can thus achieve better performance in a non-stationary environment. This tutorial will provide a friendly introduction to different meta learning techniques as well as their applications to the design and optimization of wireless communications networks. After presenting the fundamentals of meta learning, we will discuss the motivation for using meta learning in evolving future cellular networks (e.g., 5G and beyond-5G [B5G] cellular networks). In particular, three types of meta learning, namely 1) gradient-based methods, 2) metric-based methods, and 3) memory-based methods, will be introduced, from concepts to mathematical formulations and some classical real-world applications. Then, applications of meta learning techniques to different wireless problems, including resource allocation, edge computing, channel estimation, and capacity optimization, will be discussed, and the current state-of-the-art will be reviewed. Finally, the current trends, open research challenges, and future research directions for using meta learning in wireless networks will be discussed.
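
As a bare-bones illustration of the gradient-based family mentioned above, the following Python sketch runs a first-order MAML-style inner/outer loop over toy quadratic tasks, learning an initialization from which a single inner gradient step adapts well. The task distribution, step sizes, and iteration counts are assumptions made purely to show the mechanics, not the techniques taught in the tutorial.

import numpy as np

rng = np.random.default_rng(4)
dim, inner_lr, outer_lr = 3, 0.4, 0.1
meta_iters, tasks_per_batch = 300, 8

def sample_task_centre():
    """A task is the loss 0.5*||w - c||^2; its gradient at w is (w - c)."""
    return rng.normal(loc=1.0, scale=0.5, size=dim)

w_meta = np.zeros(dim)                        # shared initialisation
for _ in range(meta_iters):
    meta_grad = np.zeros(dim)
    for _ in range(tasks_per_batch):
        c = sample_task_centre()
        w_adapted = w_meta - inner_lr * (w_meta - c)   # one inner-loop step
        meta_grad += (w_adapted - c)          # first-order MAML outer gradient
    w_meta -= outer_lr * meta_grad / tasks_per_batch   # outer-loop update

print("learned initialisation:", np.round(w_meta, 2), "(near the task mean ~1.0)")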


TU-30: Realizing the Metaverse with Edge Intelligence: A Tutorial

VIRTUAL

Presenter:
Dusit Niyato (Nanyang Technological University, Singapore); Zehui Xiong (Singapore University of Technology and Design); Wei Yang Bryan Lim (Nanyang Technological University, Singapore)

Biography:
Dusit Niyato is currently a professor in the School of Computer Science and Engineering, Nanyang Technological University, Singapore. He received the B.E. degree from King Mongkut's Institute of Technology Ladkrabang (KMITL), Thailand, in 1999 and the Ph.D. degree in Electrical and Computer Engineering from the University of Manitoba, Canada, in 2008. Dusit's research interests are in the areas of distributed collaborative machine learning, the Internet of Things (IoT), edge intelligent metaverse, mobile and distributed computing, and wireless networks. Dusit won the 2011 IEEE Communications Society Fred W. Ellersick Prize Paper Award, the 2021 IEEE Computer Society Middle Career Researcher Award for Excellence in Scalable Computing, and the 2022 Distinguished Technical Achievement Recognition Award of the IEEE ComSoc Technical Committee on Green Communications and Computing. Dusit has also won a number of best paper awards, including at the IEEE Wireless Communications and Networking Conference (WCNC), the IEEE International Conference on Communications (ICC), and from the IEEE ComSoc Signal Processing and Computing for Communications Technical Committee in 2021. Currently, Dusit is serving as Editor-in-Chief of IEEE Communications Surveys and Tutorials, an area editor of IEEE Transactions on Vehicular Technology, an editor of IEEE Transactions on Wireless Communications, and an associate editor of IEEE Internet of Things Journal, IEEE Transactions on Mobile Computing, IEEE Wireless Communications, IEEE Network, and ACM Computing Surveys. He was a guest editor of the IEEE Journal on Selected Areas in Communications. He was a Distinguished Lecturer of the IEEE Communications Society for 2016-2017. He was named a Highly Cited Researcher in Computer Science in 2017-2021. He is a Fellow of the IEEE and a Fellow of the IET.

Zehui Xiong is currently an Assistant Professor at the Singapore University of Technology and Design, and also an Honorary Adjunct Senior Research Scientist with the Alibaba-NTU Singapore Joint Research Institute, Singapore. He received the Ph.D. degree from Nanyang Technological University (NTU), Singapore. He was a visiting scholar at Princeton University and the University of Waterloo. His research interests include wireless communications, the Internet of Things, blockchain, edge intelligence, and the Metaverse. He has published more than 150 research papers in leading journals and flagship conferences, and many of them are ESI Highly Cited Papers. He has won over 10 Best Paper Awards at international conferences and is listed among the World's Top 2% Scientists identified by Stanford University. He serves as editor or guest editor for many leading journals, including IEEE JSAC, TVT, IoTJ, TCCN, TNSE, ISJ, and JAS. He is the recipient of the IEEE Early Career Researcher Award for Excellence in Scalable Computing, the IEEE Technical Committee on Blockchain and Distributed Ledger Technologies Early Career Award, the IEEE Internet Technical Committee Early Achievement Award, the IEEE Best Land Transportation Paper Award, the IEEE CSIM Technical Committee Best Journal Paper Award, the IEEE SPCC Technical Committee Best Paper Award, the IEEE VTS Singapore Best Paper Award, the Chinese Government Award for Outstanding Students Abroad, and the NTU SCSE Best PhD Thesis Runner-Up Award. He is now serving as the Associate Director of the Future Communications R&D Programme.

Wei Yang Bryan Lim is currently a Wallenberg-NTU Presidential Postdoctoral Fellow. He received the Ph.D. degree from Nanyang Technological University (NTU), Singapore, in 2022 under the Alibaba PhD Talent Programme, where he won the "Most Promising PhD Student Award" for the industrial PhD programme. He has also won Best Paper Awards, including at the IEEE Wireless Communications and Networking Conference (WCNC), and the IEEE SPCC Technical Committee Best Paper Award. He regularly serves as a reviewer for leading journals and as a TPC member for flagship conferences, and is currently the journal assistant to the Editor-in-Chief of IEEE Communications Surveys & Tutorials and a review board member of IEEE Transactions on Parallel and Distributed Systems.

Abstract:
To date, tech giants have invested heavily towards realizing the Metaverse as “the successor to the mobile Internet”. In 2021, Facebook was rebranded as “Meta” as it reinvented itself from a “social media” company into a “Metaverse company”. Furthermore, government bodies around the world have announced their interest in establishing a presence in the Metaverse. However, the development of the Metaverse is still in its infancy. The stringent sensing, communication, and computation requirements impede the real-time, scalable, and ubiquitous implementation of the Metaverse. In this tutorial, we begin by presenting the current progress in the development of the Metaverse. Then, we motivate and define the Metaverse, introduce its architecture, and highlight upcoming trends and novel applications from industry and academia. To realize the Metaverse amid its unique challenges, we mainly focus on the edge intelligence-driven infrastructure layer, which is a core feature of future wireless networks. In short, edge intelligence is the convergence of edge computing and AI. We adopt the two commonly quoted divisions of edge intelligence, i.e., i) Edge for AI, which refers to the end-to-end framework of bringing sensing, communication, AI model training, and inference closer to where data is produced, and ii) AI for Edge, which refers to the use of AI algorithms to improve the orchestration of the aforementioned framework. Then, as a case study, we present a framework for collaborative edge-driven virtual city development in the Metaverse. Finally, we discuss the open research issues.


TU-31: Localization-of-Things in Beyond 5G Ecosystem

VIRTUAL

Presenter:
Moe Win (Laboratory for Information and Decision Systems, MIT); Andrea Conti (University of Ferrara)

Biography:
Moe Win is a Professor at the Massachusetts Institute of Technology (MIT). Prior to joining MIT, he was at AT&T Research Laboratories for five years and at the Jet Propulsion Laboratory for seven years. His research encompasses fundamental theories, algorithm design, and network experimentation for a broad range of real-world problems. His current research topics include network localization and navigation, network interference exploitation, and quantum information science. Professor Win has served the IEEE Communications Society as an elected Member-at-Large on the Board of Governors, as elected Chair of the Radio Communications Committee, and as an IEEE Distinguished Lecturer. Over the last two decades, he held various Editorial posts for IEEE journals and organized numerous international conferences. Currently, he is serving on the SIAM Diversity Advisory Committee. He was honored with two IEEE Technical Field Awards: the IEEE Kiyo Tomiyasu Award and the IEEE Eric E. Sumner Award. Other recognitions include the MIT Everett Moore Baker Award, the IEEE Vehicular Technology Society James Evans Avant Garde Award, the IEEE Communications Society Edwin H. Armstrong Achievement Award, the Cristoforo Colombo International Prize for Communications, the Copernicus Fellowship and the Laurea Honoris Causa from the University of Ferrara, and the U.S. Presidential Early Career Award for Scientists and Engineers. Professor Win is elected Fellow of the AAAS, the EURASIP, the IEEE, and the IET. He is an ISI Highly Cited Researcher.

Andrea Conti is a Professor at the University of Ferrara and Research Affiliate at the MIT Wireless Information and Network Sciences Laboratory. His research interests involve theory and experimentation of wireless systems and networks including network localization, distributed sensing, and quantum information science. He received the HTE Puskás Tivadar Medal, the IEEE Communications Society’s Stephen O. Rice Prize in the field of Communications Theory, and the IEEE Communications Society’s Fred W. Ellersick Prize. Dr. Conti has served as editor for IEEE journals, as well as chaired international conferences. He has been elected Chair of the IEEE Communications Society’s Radio Communications Technical Committee. He is a co-founder and elected Secretary of the IEEE Quantum Communications & Information Technology Emerging Technical Subcommittee. Professor Conti is an elected Fellow of the IEEE and the IET, and he has been selected as an IEEE Distinguished Lecturer.

Abstract:
The availability of real-time high-accuracy location awareness is essential for current and future wireless applications, particularly those involving the Internet-of-Things and the beyond-5G ecosystem. Reliable localization and navigation of people, objects, and vehicles, referred to as Localization-of-Things (LoT), is a critical component for a diverse set of applications including connected communities, smart environments, vehicle autonomy, asset tracking, medical services, military systems, and crowd sensing. The coming years will see the emergence of network localization and navigation in challenging environments with sub-meter accuracy and minimal infrastructure requirements.

We will discuss the limitations of traditional positioning and then move on to the key enablers for high-accuracy location awareness: wideband transmission and cooperative processing. Topics covered will include fundamental bounds, cooperative algorithms, and network experimentation. Fundamental bounds serve as performance benchmarks and as a tool for network design. Cooperative algorithms are a way to achieve dramatic performance improvements compared to traditional non-cooperative positioning. To harness these benefits, system designers must consider realistic operational settings; thus, we present the performance of cooperative localization based on measurement campaigns. We will also present LoT enablers, including reconfigurable intelligent surfaces, which promise to provide a dramatic gain in localization accuracy and system robustness in next-generation networks.
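
For readers new to range-based positioning, the Python sketch below estimates a position from noisy ranges to known anchors via Gauss-Newton iterations; it is only the non-cooperative building block that the cooperative algorithms and fundamental bounds in this tutorial generalize. The anchor layout, noise level, and iteration count are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(5)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0.0, 0.1, 4)

x = np.array([5.0, 5.0])                      # initial guess: centre of the area
for _ in range(10):
    diff = x - anchors
    dists = np.linalg.norm(diff, axis=1)
    residual = dists - ranges                 # range-model residuals
    J = diff / dists[:, None]                 # Jacobian of the range model
    x = x - np.linalg.lstsq(J, residual, rcond=None)[0]   # Gauss-Newton step

print("estimated position:", np.round(x, 2), " true position:", true_pos)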


TU-32: Wireless for Machine Learning

VIRTUAL

Presenter:
Carlo Fischione (KTH Royal Institute of Technology, Sweden); Viktoria Fodor (KTH Royal Institute of Technology, Sweden); José Mairton B. da Silva Jr. (KTH Royal Institute of Technology, Sweden); Henrik Hellström (KTH Royal Institute of Technology, Sweden)

Biography:
Carlo Fischione is a Professor at the KTH Royal Institute of Technology, Sweden. He is chair of the IEEE ComSoc Machine Learning for Communications Emerging Technology Initiative and director of the "Data Science" Micro Degree Program of the KTH Royal Institute of Technology, an advanced study program to upskill industrial researchers worldwide in data science for telecommunications. He received the Ph.D. degree in Electrical and Information Engineering in 2005 and the Laurea degree in Electronic Engineering (summa cum laude) in 2001 from the University of L'Aquila, Italy. He has held faculty positions at the University of California at Berkeley, the Massachusetts Institute of Technology (MIT), and Harvard University. He is the recipient of numerous awards, including Best Paper Awards from the IEEE Transactions on Communications (2018) and the IEEE Transactions on Industrial Informatics (2007), and several Best Paper Awards at IEEE conferences. He has co-authored over 200 publications, including a book, book chapters, journal and conference papers, and patents. He has offered consultancy to numerous technology companies such as ABB Corporate Research, Berkeley Wireless Sensor Network Lab, Ericsson Research, Synopsys, and United Technology Research Center. His research interests include optimization with applications to wireless networks, the Internet of Things, and machine learning. He is an Editor of the IEEE Transactions on Communications (Machine Learning area) and of the IEEE Journal on Selected Areas in Communications series on Machine Learning in Communications and Networking.

Viktoria Fodor is Professor of Communication Networks at the KTH Royal Institute of Technology, Sweden. She received the M.Sc. and Ph.D. degrees from the Budapest University of Technology and Economics, Budapest, Hungary, in 1992 and 1999, respectively, both in computer engineering. She received the habilitation qualification (docent) from KTH in 2011. In 1998, she was a senior researcher with the Hungarian Telecommunication Company. Since 1999, she has been with KTH. Her current research interests include performance evaluation of networks and distributed systems, stochastic modeling, and protocol design, with a focus on edge computing and ML over networks. She has published more than a hundred scientific publications, is an associate editor of IEEE Transactions on Network and Service Management and Wiley Transactions on Emerging Telecommunications Technologies, and was an area chair of IEEE INFOCOM 2019.

José Mairton B. da Silva Jr. is currently a Marie Skłodowska-Curie Postdoctoral Fellow at Princeton University in the Department of Electrical and Computer Engineering with Prof. H. Vincent Poor, and at the KTH Royal Institute of Technology in the Division of Network and Systems Engineering with Prof. Carlo Fischione. He received the Ph.D. degree in Electrical Engineering and Computer Science from the KTH Royal Institute of Technology, Stockholm, Sweden, in 2019 under the supervision of Prof. Carlo Fischione and Prof. Gábor Fodor. He received his B.Sc. (with honors) and M.Sc. degrees in Telecommunications Engineering from the Federal University of Ceará in 2012 and 2014, respectively. He worked as a research engineer at the Wireless Telecommunication Research Group (GTEL) in Fortaleza from July 2012 to March 2015. During the autumn/winter of 2013-2014, he was an intern at Ericsson Research in Stockholm, Sweden, under the supervision of Prof. Gábor Fodor. During spring/fall 2018, he was a visiting researcher at Rice University in Houston, Texas, USA, under the supervision of Prof. Ashutosh Sabharwal. Mairton was a Postdoctoral Researcher with the KTH Royal Institute of Technology from April 2019 to April 2022, and during spring 2022 he was a visiting researcher with Princeton University, USA.

Henrik Hellström earned his M.Sc. degree in information and network engineering from the KTH Royal Institute of Technology, Stockholm, Sweden, in 2019. As a master’s degree student, he worked at the ABB Corporate Research Center in Västerås, Sweden, and continued as a research engineer following the completion of his master’s thesis. Currently, he is pursuing his Ph.D. degree in information and communication technology at the KTH Royal Institute of Technology. He is currently the secretary for the IEEE Emerging Technology Initiative on Machine Learning for Communications. His research interests include over-the-air computation, intelligent edge networks, distributed machine learning, the internet of things, and industrial wireless communication networks.

Abstract:
In view of emerging applications from autonomous driving to health monitoring, it is very likely that a large part of machine learning (ML) services in the near future will take place over wireless networks and, conversely, that a large part of wirelessly transmitted information will be related to ML. As data generation increasingly takes place on devices without a wired connection, ML over wireless networks becomes critical. Many studies have shown that traditional wireless protocols are highly inefficient or unsustainable for supporting distributed ML services. This is creating the need for new wireless communication methods, specifically at the medium access control and physical layers, that will arguably be included in 6G. In this tutorial, we plan to give a comprehensive review of the state-of-the-art wireless methods that are specifically designed to support ML services, namely over-the-air computation and digital radio resource management (RRM) optimized for ML. In the over-the-air approach, multiple devices communicate simultaneously over the same time slot and frequency band to leverage the superposition property of wireless channels for gradient averaging over the air. In RRM optimized for ML, the objective of communication pivots from the efficient and reliable reconstruction of data to ML metrics such as maximizing the classification accuracy of an ML model. This tutorial introduces these methods, reviews the most important works, and highlights crucial open problems.
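
The following Python sketch illustrates the over-the-air aggregation idea described above: each device pre-scales its local gradient by the inverse of its (assumed known) channel gain, the transmissions superpose at the receiver, and rescaling yields a noisy estimate of the average gradient. The flat-fading channel model and the omission of transmit-power constraints are simplifying assumptions made for this example, not the schemes analyzed in the tutorial.

import numpy as np

rng = np.random.default_rng(6)
n_devices, dim, noise_std = 10, 5, 0.01

gradients = rng.normal(size=(n_devices, dim))        # local model gradients
channels = rng.uniform(0.5, 1.5, size=n_devices)     # flat-fading channel gains

# Each device pre-scales its gradient by 1/h_k; the transmissions then add
# "in the air", so the receiver observes the sum plus noise.
tx = gradients / channels[:, None]
rx = (channels[:, None] * tx).sum(axis=0) + rng.normal(0.0, noise_std, dim)

print("over-the-air average:", np.round(rx / n_devices, 3))
print("true average:        ", np.round(gradients.mean(axis=0), 3))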
