Mind the Gaps: Assuring the Safety of Autonomous Systems from an Engineering, Ethical, and Legal Perspective

Research output: Contribution to journal › Article › peer-review


This paper brings together a multi-disciplinary perspective from systems engineering, ethics, and law to articulate a common language in which to reason about the multi-faceted problem of assuring the safety of autonomous systems. The paper's focus is on the “gaps” that arise across the development process: the semantic gap, where normal conditions for a complete specification of intended functionality are not present; the responsibility gap, where normal conditions for holding human actors morally responsible for harm are not present; and the liability gap, where normal conditions for securing compensation to victims of harm are not present. By categorising these “gaps” we can expose with greater precision key sources of uncertainty and risk with autonomous systems. This can inform the development of more detailed models of safety assurance and contribute to more effective risk control.
Original language: English
Article number: 103201
Journal: Artificial Intelligence
Early online date: 9 Nov 2019
Publication status: E-pub ahead of print - 9 Nov 2019

Bibliographical note

© 2019 Elsevier B.V. All rights reserved. This is an author-produced version of the published paper. Uploaded in accordance with the publisher’s self-archiving policy.


Keywords

  • Safety
  • Autonomous systems
  • Artificial intelligence
  • Law
