Explainable Reinforcement and Causal Learning for Improving Trust to 6G Stakeholders

Miguel Arana-Catania, Amir Sonee, Abdul Manan Khan, Kavan Fatehi, Yun Tang, Bailu Jin, Anna Soligo, David Boyle, Radu Calinescu, Poonam Yadav, Hamed Ahmadi, Antonios Tsourdos, Weisi Guo*, Alessandra Russo

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

Abstract

Future telecommunications will increasingly integrate AI capabilities into network infrastructures to deliver seamless and harmonized services closer to end-users. However, this progress also raises significant trust and safety concerns. The machine learning systems orchestrating these advanced services will rely widely on deep reinforcement learning (DRL) to process multi-modal requirements datasets and make semantically modulated decisions, introducing three major challenges: (1) most explainable AI research is stakeholder-agnostic while, in reality, explanations must cater for diverse telecommunications stakeholders, including network service providers, legal authorities, and end users, each with unique goals and operational practices; (2) DRL lacks prior models or established frameworks to guide the creation of meaningful long-term explanations of an agent's behaviour in a goal-oriented RL task, and we introduce state-of-the-art approaches such as reward machines and sub-goal automata that can be universally represented, easily manipulated as logic programs, and verifiably learned by inductive logic programming of answer set programs; (3) most explainability approaches focus on correlation rather than causation, and we emphasise that causal learning can further enhance 6G network optimisation. Together, in our judgement, these form crucial enabling technologies for trustworthy services in 6G. This review offers a timely resource for academic researchers and industry practitioners by highlighting the methodological advancements needed for explainable DRL (X-DRL) in 6G. It identifies key stakeholder groups, maps their needs to X-DRL solutions, and presents case studies showcasing practical applications.
By identifying and analysing these challenges in the context of 6G case studies, this work aims to inform future research, transform industry practices, and highlight unresolved gaps in this rapidly evolving field.
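The reward machines mentioned in challenge (2) can be thought of as small finite-state automata whose transitions fire on high-level events and emit rewards, making the long-term task structure explicit and inspectable. A minimal sketch in Python follows; the states, events, and the toy "authenticate then serve" task are purely illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of a reward machine: a finite-state automaton whose
# transitions are triggered by high-level events and emit scalar rewards.
# All names (states "u0".."u2", events "auth"/"serve") are hypothetical.

class RewardMachine:
    def __init__(self, transitions, initial_state):
        # transitions: {(state, event): (next_state, reward)}
        self.transitions = transitions
        self.state = initial_state

    def step(self, event):
        """Advance on an observed event; unmatched events keep the state and give reward 0."""
        next_state, reward = self.transitions.get(
            (self.state, event), (self.state, 0.0)
        )
        self.state = next_state
        return reward

# Toy goal-oriented task: first secure a link ("auth"), then serve a user ("serve").
rm = RewardMachine(
    transitions={
        ("u0", "auth"): ("u1", 0.0),   # sub-goal reached, no reward yet
        ("u1", "serve"): ("u2", 1.0),  # task complete, reward issued
    },
    initial_state="u0",
)

rewards = [rm.step(e) for e in ["serve", "auth", "serve"]]
print(rewards)  # serving before authenticating yields no reward
```

Because the automaton is an explicit symbolic object, it can be rendered directly as a logic program, which is what makes it amenable to learning and verification with answer set programming, as discussed in the review.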

Original language: English
Pages (from-to): 4101-4125
Number of pages: 25
Journal: IEEE Open Journal of the Communications Society
Volume: 6
DOIs
Publication status: Published - 22 Apr 2025

Bibliographical note

Publisher Copyright:
© 2020 IEEE.

Keywords

  • 6G
  • causal learning
  • explainable AI
  • reinforcement learning
  • stakeholders
  • trust
