TY - JOUR
T1 - Explainable Reinforcement and Causal Learning for Improving Trust to 6G Stakeholders
AU - Arana-Catania, Miguel
AU - Sonee, Amir
AU - Khan, Abdul Manan
AU - Fatehi, Kavan
AU - Tang, Yun
AU - Jin, Bailu
AU - Soligo, Anna
AU - Boyle, David
AU - Calinescu, Radu
AU - Yadav, Poonam
AU - Ahmadi, Hamed
AU - Tsourdos, Antonios
AU - Guo, Weisi
AU - Russo, Alessandra
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025/4/22
Y1 - 2025/4/22
N2 - Future telecommunications will increasingly integrate AI capabilities into network infrastructures to deliver seamless and harmonized services closer to end-users. However, this progress also raises significant trust and safety concerns. The machine learning systems orchestrating these advanced services will rely widely on deep reinforcement learning (DRL) to process multi-modal requirements datasets and make semantically modulated decisions, introducing three major challenges: (1) most explainable AI research is stakeholder-agnostic, whereas in reality explanations must cater for diverse telecommunications stakeholders, including network service providers, legal authorities, and end users, each with unique goals and operational practices; (2) DRL lacks prior models or established frameworks to guide the creation of meaningful long-term explanations of an agent's behaviour in a goal-oriented RL task, and we introduce state-of-the-art approaches such as reward machines and sub-goal automata that can be universally represented and easily manipulated by logic programs and verifiably learned by inductive logic programming of answer set programs; (3) most explainability approaches focus on correlation rather than causation, and we emphasise that understanding causal learning can further enhance 6G network optimisation. Together, in our judgement, these form crucial enabling technologies for trustworthy services in 6G. This review offers a timely resource for academic researchers and industry practitioners by highlighting the methodological advancements needed for explainable DRL (X-DRL) in 6G. It identifies key stakeholder groups, maps their needs to X-DRL solutions, and presents case studies showcasing practical applications.
By identifying and analysing these challenges in the context of 6G case studies, this work aims to inform future research, transform industry practices, and highlight unresolved gaps in this rapidly evolving field.
AB - Future telecommunications will increasingly integrate AI capabilities into network infrastructures to deliver seamless and harmonized services closer to end-users. However, this progress also raises significant trust and safety concerns. The machine learning systems orchestrating these advanced services will rely widely on deep reinforcement learning (DRL) to process multi-modal requirements datasets and make semantically modulated decisions, introducing three major challenges: (1) most explainable AI research is stakeholder-agnostic, whereas in reality explanations must cater for diverse telecommunications stakeholders, including network service providers, legal authorities, and end users, each with unique goals and operational practices; (2) DRL lacks prior models or established frameworks to guide the creation of meaningful long-term explanations of an agent's behaviour in a goal-oriented RL task, and we introduce state-of-the-art approaches such as reward machines and sub-goal automata that can be universally represented and easily manipulated by logic programs and verifiably learned by inductive logic programming of answer set programs; (3) most explainability approaches focus on correlation rather than causation, and we emphasise that understanding causal learning can further enhance 6G network optimisation. Together, in our judgement, these form crucial enabling technologies for trustworthy services in 6G. This review offers a timely resource for academic researchers and industry practitioners by highlighting the methodological advancements needed for explainable DRL (X-DRL) in 6G. It identifies key stakeholder groups, maps their needs to X-DRL solutions, and presents case studies showcasing practical applications.
By identifying and analysing these challenges in the context of 6G case studies, this work aims to inform future research, transform industry practices, and highlight unresolved gaps in this rapidly evolving field.
KW - 6G
KW - causal learning
KW - explainable AI
KW - reinforcement learning
KW - stakeholders
KW - trust
UR - http://www.scopus.com/inward/record.url?scp=105003489540&partnerID=8YFLogxK
U2 - 10.1109/OJCOMS.2025.3563415
DO - 10.1109/OJCOMS.2025.3563415
M3 - Review article
AN - SCOPUS:105003489540
SN - 2644-125X
VL - 6
SP - 4101
EP - 4125
JO - IEEE Open Journal of the Communications Society
JF - IEEE Open Journal of the Communications Society
ER -