Confidence Arguments for Evidence of Performance in Machine Learning for Highly Automated Driving Functions

Simon Burton, Lydia Gauerhof, Richard David Hawkins, Ibrahim Habli, Bibhuti Sethy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Due to their ability to efficiently process unstructured and high-dimensional input data, machine learning algorithms are being applied to perception tasks for highly automated driving functions. The consequences of failures and insufficiencies in such algorithms are severe, and a convincing assurance case that the algorithms meet certain safety requirements is therefore required. However, the task of demonstrating the performance of such algorithms is non-trivial, and as yet no consensus has formed regarding an appropriate set of verification measures. This paper provides a framework for reasoning about the contribution of performance evidence to the assurance case for machine learning in an automated driving context and applies the evaluation criteria to a pedestrian recognition case study.
Original language: English
Title of host publication: SAFECOMP 2019 - Workshop on Artificial Intelligence Safety Engineering (WAISE) of the 38th International Conference on Computer Safety, Reliability and Security
Publication status: Accepted/In press - 6 Jun 2019