What's my role? Modelling responsibility for AI-based safety-critical systems

Research output: Working paper


AI-Based Safety-Critical Systems (AI-SCS) are increasingly being developed and deployed in the real world. These systems can pose a risk of harm to people and the environment, hence reducing that risk is an overarching priority during development and operation. Many contain Machine Learning (ML) components whose performance is hard to predict. As more AI systems become autonomous, a layer of risk management previously provided by human intervention is removed.
Following an accident, it will be important to identify the causal contributions and the different responsible actors behind them, in order to learn from mistakes and prevent similar future events. Many authors have commented on the "responsibility gap", whereby it is difficult for developers and manufacturers to be held responsible for behaviour of an AI-SCS that contributes to harm. This is due to the complex development cycle for AI, the uncertainty in the performance of black-box AI components, and the dynamic operating environment. Instead, a human operator of the AI-SCS can become a "liability sink", absorbing blame for the consequences of AI-SCS outputs they were not responsible for creating and may not fully understand.

This cross-disciplinary paper considers different types of responsibility (role, moral, legal and causal), and how they apply in the context of AI-SCS safety. We use a core conceptual formulation, "Actor (A) is responsible for Occurrence (O)", to create detailed role responsibility models, including related tasks and resources. Our aim is to present a practical method to precisely capture responsibility relationships, and to provide clarity on the previously identified responsibility issues. We propose an analysis method to review the models for safety impacts. Our paper demonstrates the utility of the approach with two practical examples. The first is a retrospective analysis of the fatal collision in Tempe, Arizona involving an autonomous vehicle. The second is a safety-focused predictive role-responsibility analysis for an AI-based diabetes co-morbidity prediction system. We show how our notation and analysis can illuminate responsibility issues during the AI development process and identify the impact of uncertainty surrounding how tasks were performed. For both examples, our focus was primarily on safety, with an aim of reducing unfair or disproportionate blame being placed on operators or developers of AI-SCS. We present a discussion and avenues for future research, including the development of a richer causal contribution model.
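The core formulation above can be illustrated with a minimal sketch (the paper's own notation and models are richer; the class, field names and example values here are hypothetical, chosen only to show the Actor/Occurrence relation and the four responsibility types):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Responsibility:
    """Hypothetical encoding of 'Actor (A) is responsible for Occurrence (O)'."""
    actor: str       # Actor (A), e.g. a developer, operator, or regulator
    occurrence: str  # Occurrence (O), e.g. a task, output, or event
    kind: str        # one of the types discussed: "role", "moral", "legal", "causal"

    def describe(self) -> str:
        # Render the relationship in the paper's conceptual form
        return f"{self.actor} is responsible ({self.kind}) for {self.occurrence}"

# Illustrative instance only; not taken from the paper's case studies
r = Responsibility("ML developer", "validation of the prediction component", "role")
print(r.describe())
```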
Original language: English
Number of pages: 22
Publication status: Published - 2023