Specifying Safety Requirements for Machine Learning Components in Autonomous Systems: A Survey

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Machine learning (ML) components are recognized for their potential to undertake tasks such as object detection and classification across a range of safety-related applications. To be used safely, it is crucial that safety requirements for ML components are correctly understood, specified in a manner that supports ML development, and demonstrated to be sufficient and valid. Traditional safety requirements approaches may not apply well to ML components because of their data-driven nature, especially in complex environments. Defining safety requirements for ML components requires an understanding of the unique mechanisms by which ML can contribute to system safety and of the potential failure modes of ML components. So far, little work has attempted to systematically derive safety requirements specific to ML components and to ensure a traceable link between system-level and component-level safety requirements. This work aims to address this gap by providing a comprehensive survey of the existing literature on methods for eliciting safety requirements for ML components. We identify key challenges and limitations in current methods and propose possible solutions. This paper highlights these issues and reviews current research to lay a foundation for developing robust and effective safety requirements for ML components.
Original language: English
Title of host publication: Safety Critical Systems Symposium (SSS '25)
Publication status: Published - 6 Feb 2025

Bibliographical note

This is an author-produced version of the published paper. Uploaded in accordance with the University’s Research Publications and Open Access policy.
