Comparing the comprehensiveness of three expert inspection methodologies for detecting errors in interactive systems

Research output: Contribution to journal › Article › peer-review


Abstract

Expert inspection methods provide a means of evaluating interactive systems for error throughout the design lifecycle. Experts have a wide variety of methods available to them for detecting either potential user errors or usability problems that will provoke error; however, data on what types of errors each method detects are sparse. This paper presents the results of a study into the comprehensiveness of three expert inspection methods, applied by nine evaluators across three devices. The study produced 350 errors, which were analysed to identify, compare and contrast the types of errors detected by each method. Notably, the investigation revealed that a substantial number of distinct errors were detected by only one method. The paper closes with a discussion of the implications of these results for future practice with multi-method approaches, as well as directions for future investigation.
Original language: English
Pages (from-to): 286-294
Number of pages: 9
Journal: Safety Science
Early online date: 23 Oct 2013
Publication status: Published - 2013


Keywords

  • Expert inspection methods
  • Error inspection methods
  • Usability inspection methods
  • Error types
  • Behavioural deviation
  • Cognitive phase
  • Cognitive Walkthrough