Abstract
Every safety case should describe the deployment domain and environmental constraints within which the system is expected to operate. Recently, many AI and autonomous-system safety standards have proposed the use of a detailed formal description of the Operating Domain (OD) for autonomous safety-critical systems. This OD is based on human understanding of variations and expected limitations, and is used to shape the data collection, testing, validation, verification and operational deployment of the system. For example, we assume an autonomous car will be driving on specific road layouts, with localised markings/signage and weather, sharing the road with a defined set of other road users. However, a Machine Learning (ML) component's OD (e.g., that of a Deep Neural Network) is fundamentally different to ours, being based on numerical data arrays. Our position is that over-reliance on a human-centred OD to shape verification and validation (V&V) will lead to false confidence and to safety issues being missed. Instead, we propose that effort be spent reverse engineering the ML component's view of the world, to better understand the OD's gaps and areas of uncertainty, and hence to derive strategies for mitigating the related hazards.
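To illustrate the gap the abstract describes: to a vision DNN, a scene such as "a UK road in light rain" is only a numeric tensor, and one common way to probe the model's own view of its operating domain is to score how far an input's embedding lies from the training distribution. Below is a minimal sketch of that idea using a Mahalanobis-distance score, a standard out-of-distribution technique rather than a method taken from this paper; the embeddings here are synthetic stand-ins for a real feature extractor's outputs, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# To the ML component, the "operating domain" is a region of feature space,
# not a human description like "motorways in daylight, light rain".
# Synthetic stand-in for embeddings of the training data:
train_feats = rng.normal(loc=0.0, scale=1.0, size=(5000, 64))

mu = train_feats.mean(axis=0)
cov = np.cov(train_feats, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(64))  # regularise for stability

def mahalanobis_score(x: np.ndarray) -> float:
    """Squared Mahalanobis distance of an input's embedding from the
    training distribution. High scores flag inputs outside the model's
    empirical operating domain, even if a human-centred OD would
    consider the scene 'in scope'."""
    d = x - mu
    return float(d @ cov_inv @ d)

in_domain = rng.normal(0.0, 1.0, size=64)
shifted = rng.normal(2.5, 1.0, size=64)  # e.g. unseen weather or sensor noise
print(f"in-domain score: {mahalanobis_score(in_domain):.1f}")
print(f"shifted score:   {mahalanobis_score(shifted):.1f}")
```

In this sketch the shifted input scores far higher than the in-domain one, which is the kind of evidence the abstract's proposed reverse engineering would surface: regions where the ML component's learned domain and the human-specified OD disagree.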
| Original language | English |
| --- | --- |
| Publication status | Published - Sept 2024 |
| Event | 43rd International Conference on Computer Safety, Reliability and Security - Florence, Italy. Duration: 17 Sept 2024 → … |
Conference
| Conference | 43rd International Conference on Computer Safety, Reliability and Security |
| --- | --- |
| Abbreviated title | SAFECOMP |
| Country/Territory | Italy |
| Period | 17/09/24 → … |