Abstract
In dynamic environments, safety-critical autonomous systems must adapt to environmental changes without violating safety requirements. Model verification at runtime supports adaptation through the periodic analysis of continually updated models. A major limitation of the technique is the high overhead associated with the regular analysis of large state-space models. Our article introduces an adaptive approximation strategy that tackles this limitation by delaying unnecessary model updates, significantly reducing the overhead of these analyses. The strategy is applicable to Markov decision processes (MDPs), which are partitioned into components that can be analyzed independently and approximately. Each component is assigned a priority that reflects its impact on verification accuracy, and only the highest-priority components affected by changes are scheduled for updating or approximation. A complete update and verification of the entire model is triggered infrequently, when the accuracy drops below a predefined threshold. We provide theoretical guarantees and proofs ensuring that our strategy can be applied without compromising the overall safety of the verified autonomous system. Experimental results from a case study in which we applied the strategy to a rescue robot team show that the strategy is fully robust against safety-critical errors and achieves a decision accuracy of over 97%.
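As a rough illustration of the scheduling idea summarized in the abstract, the following Python sketch selects only the highest-priority components affected by changes and falls back to a full model update when accuracy drops below a threshold. The `Component` class, the `budget` parameter, and all values are hypothetical and not taken from the article; this is a minimal sketch of the general idea, not the authors' implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of priority-based update scheduling; the data
# structures, thresholds, and budget are illustrative assumptions only.

@dataclass
class Component:
    name: str
    priority: float      # impact on verification accuracy (higher = more important)
    stale: bool = False  # True if an environmental change affected this component

def schedule_updates(components, budget, accuracy, accuracy_threshold):
    """Select which stale components to re-approximate this verification cycle."""
    if accuracy < accuracy_threshold:
        # Accuracy dropped too far: trigger a full update of the entire model.
        return list(components), True

    # Otherwise, update only the highest-priority components affected by
    # changes, up to the available analysis budget.
    stale = sorted((c for c in components if c.stale),
                   key=lambda c: c.priority, reverse=True)
    return stale[:budget], False

# Example usage with made-up values.
if __name__ == "__main__":
    comps = [Component("navigation", 0.9, stale=True),
             Component("battery", 0.4, stale=True),
             Component("comms", 0.7, stale=False)]
    selected, full_update = schedule_updates(comps, budget=1,
                                             accuracy=0.98, accuracy_threshold=0.95)
    print([c.name for c in selected], full_update)  # ['navigation'] False
```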
| Original language | English |
|---|---|
| Pages (from-to) | 1-45 |
| Number of pages | 45 |
| Journal | ACM Transactions on Autonomous and Adaptive Systems |
| Volume | 20 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 18 Mar 2025 |
Bibliographical note
Publisher Copyright: © 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM.
Keywords
- formal approximation
- Markov models
- runtime model verification
- safety-critical systems
- self-adaptive systems