The need for the human-centred explanation for ML-based clinical decision support systems

Yan Jia, John Alexander McDermid, Nathan Hughes, Mark-Alexander Sujan, Tom Lawton, Ibrahim Habli

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Machine learning has shown great promise in a variety of applications, but the deployment of these systems is hindered by the "opaque" nature of machine learning algorithms. This has led to the development of explainable AI methods, which aim to provide insight into complex algorithms through explanations that are comprehensible to humans. However, many of the explanations currently available are technically focused and reflect what machine learning researchers believe constitutes a good explanation, rather than what users actually want. This paper highlights the need to develop human-centred explanations for machine learning-based clinical decision support systems, since the users of these systems are clinicians, who typically have limited knowledge of machine learning techniques. The authors define the requirements for human-centred explanations, briefly discuss the current state of available explainable AI methods, and then analyse the gaps between human-centred explanations and current explainable AI methods. A clinical use case is presented to demonstrate the vision for human-centred explanations.
Original language: English
Title of host publication: 2023 IEEE 11th International Conference on Healthcare Informatics (ICHI)
Publication status: Accepted/In press - 2023

Bibliographical note

This is an author-produced version of the published paper, uploaded in accordance with the publisher’s self-archiving policy. Further copying may not be permitted; contact the publisher for details.