The need for the human-centred explanation for ML-based clinical decision support systems

Yan Jia*, John McDermid, Nathan Hughes, Mark Sujan, Tom Lawton, Ibrahim Habli

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Machine learning has shown great promise in a variety of applications, but the deployment of these systems is hindered by the "opaque" nature of machine learning algorithms. This has led to the development of explainable AI methods, which aim to provide insights into complex algorithms through explanations that are comprehensible to humans. However, many of the explanations currently available are technically focused and reflect what machine learning researchers believe constitutes a good explanation, rather than what users actually want. This paper highlights the need to develop human-centred explanations for machine learning-based clinical decision support systems, since the users of these systems are clinicians who typically have limited knowledge of machine learning techniques. The authors define the requirements for human-centred explanations, briefly discuss the current state of available explainable AI methods, and then analyse the gaps between human-centred explanations and current explainable AI methods. A clinical use case is presented to demonstrate the vision for human-centred explanations.
Original language: English
Title of host publication: Proceedings - 2023 IEEE 11th International Conference on Healthcare Informatics, ICHI 2023
Number of pages: 7
ISBN (Electronic): 9798350302639
Publication status: Published - 2023
Event: 11th IEEE International Conference on Healthcare Informatics, ICHI 2023 - Houston, United States
Duration: 26 Jun 2023 - 29 Jun 2023

Publication series

Name: Proceedings - 2023 IEEE 11th International Conference on Healthcare Informatics, ICHI 2023


Conference: 11th IEEE International Conference on Healthcare Informatics, ICHI 2023
Country/Territory: United States

Bibliographical note

Funding Information:
Acknowledgement: This project is funded by Lloyd's Register Foundation and the University of York through the Assuring Autonomy International Programme (Project ref 06/22/04) and by the Engineering and Physical Sciences Research Council through the Assuring Responsibility for Trustworthy Autonomous Systems project (EP/W011239/1).

Publisher Copyright:
© 2023 IEEE.


Keywords:
  • CDSS
  • Explainable AI
  • Human-centred XAI
