Assurance Argument Patterns and Processes for Machine Learning in Safety-Related Systems

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Publication details

Title of host publication: Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020)
Date: Published - 27 Feb 2020
Pages: 23-30
Publisher: CEUR Workshop Proceedings
Original language: English

Publication series

Name: CEUR Workshop Proceedings
Volume: 2560
ISSN (Electronic): 1613-0073

Abstract

Machine Learnt (ML) components are now widely accepted for use in a range of applications, with results that are reported to exceed, under certain conditions, human performance. The adoption of ML components in safety-related domains is restricted, however, unless sufficient assurance can be demonstrated that the use of these components does not compromise safety. In this paper, we present patterns that can be used to develop assurance arguments for demonstrating the safety of ML components. The argument patterns provide reusable templates for the types of claims that must be made in a compelling argument. On their own, the patterns neither detail the assurance artefacts that must be generated to support the safety claims for a particular system, nor provide guidance on the activities required to generate those artefacts. We have therefore also developed a process for the engineering of ML components in which the assurance evidence can be generated at each stage of the ML lifecycle in order to instantiate the argument patterns and create the assurance case for ML components. Together, the patterns and the process could provide a practical and clear basis for the justifiable deployment of ML components in safety-related systems.

Bibliographical note

© 2020 for this paper by its authors.
