Assurance Argument Patterns and Processes for Machine Learning in Safety-Related Systems

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Machine Learnt (ML) components are now widely accepted for use in a range of applications, with results that are reported to exceed, under certain conditions, human performance. The adoption of ML components in safety-related domains is restricted, however, unless sufficient assurance can be demonstrated that the use of these components does not compromise safety. In this paper, we present patterns that can be used to develop assurance arguments for demonstrating the safety of ML components. The argument patterns provide reusable templates for the types of claims that must be made in a compelling argument. On their own, the patterns neither detail the assurance artefacts that must be generated to support the safety claims for a particular system, nor provide guidance on the activities required to generate those artefacts. We have therefore also developed a process for the engineering of ML components in which assurance evidence is generated at each stage of the ML lifecycle in order to instantiate the argument patterns and create the assurance case for ML components. Together, the patterns and the process could provide a practical and clear basis for the justifiable deployment of ML components in safety-related systems.
Original language: English
Title of host publication: Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020)
Publisher: CEUR Workshop Proceedings
Publication status: Published - 27 Feb 2020

Publication series

Name: CEUR Workshop Proceedings
ISSN (Electronic): 1613-0073

Bibliographical note

© 2020 for this paper by its authors.
