A Pattern for Arguing the Assurance of Machine Learning in Medical Diagnosis Systems

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Machine Learning offers the potential to revolutionise healthcare, with recent work showing that machine-learnt algorithms can match or exceed expert human performance. The adoption of such systems in the medical domain should not happen, however, unless sufficient assurance can be demonstrated. In this paper we consider the implicit assurance argument for state-of-the-art systems that use machine-learnt models for clinical diagnosis, e.g. retinal disease diagnosis. Based upon an assessment of this implicit argument, we identify a number of additional assurance considerations that would need to be addressed in order to create a compelling assurance case. We present an assurance case pattern that we have developed to explicitly address these assurance considerations. This pattern may also be applicable to a wide class of critical domains where ML is used in the decision-making process.
Original language: English
Title of host publication: 38th International Conference on Computer Safety, Reliability and Security – SafeComp 2019
Publication status: Accepted/In press - 30 Apr 2019