Developing artificial neural networks for safety critical systems

Zeshan Kurd, Tim Kelly, Jim Austin

Research output: Contribution to journal › Article › peer-review


There are many performance-based techniques that aim to improve the safety of neural networks for safety-critical applications. However, many of these approaches provide inadequate forms of the safety assurance required for certification. As a result, neural networks are typically restricted to advisory roles in safety-related applications. Neural networks have the ability to operate in unpredictable and changing environments, so it is desirable to certify them for highly dependable roles in safety-critical systems. This paper outlines a set of safety criteria: safety requirements on the behaviour of neural networks. If enforced, these criteria can contribute to justifying the safety of ANN functional properties. Characteristics of potential neural network models are also outlined, based upon representing knowledge in interpretable and understandable forms. The paper also presents a safety lifecycle for artificial neural networks. This lifecycle focuses on managing the behaviour represented by neural networks and contributes to providing acceptable forms of safety assurance.

Original language: English
Pages (from-to): 11-19
Number of pages: 9
Journal: Neural Computing & Applications
Issue number: 1
Publication status: Published - Jan 2007


Keywords

  • safety critical
  • neural network
  • criteria
  • lifecycle
  • faults
  • hazards
  • symbolic knowledge
