Establishing safety criteria for artificial neural networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Publication details

Title of host publication: Knowledge-Based Intelligent Information and Engineering Systems, Pt 1, Proceedings
Date: Published - 2003
Pages: 163-169
Number of pages: 7
Publisher: Springer-Verlag Berlin
Place of publication: Berlin
Editors: V. Palade, R. J. Howlett, L. Jain
Original language: English
ISBN (Print): 3-540-40803-7

Abstract

Artificial neural networks are employed in many industrial areas, such as medicine and defence. Many techniques aim to improve the performance of neural networks for safety-critical systems. However, there is a complete absence of analytical certification methods for neural network paradigms. Consequently, their role in safety-critical applications, if any, is typically restricted to advisory systems. It is therefore desirable to enable neural networks for highly dependable roles. This paper defines safety criteria which, if enforced, would contribute to justifying the safety of neural networks. The criteria are a set of safety requirements for the behaviour of neural networks. The paper also highlights the challenge of maintaining performance, in terms of adaptability and generalisation, whilst providing acceptable safety arguments.
