Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise from mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error or simply natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can also identify errors and remove their contaminating effect on the data set, thereby purifying the data for processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of Computer Science and Statistics. In this paper, we present a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
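As a minimal illustration of the kind of statistical technique such surveys cover, the sketch below flags observations whose z-score magnitude exceeds a cut-off. This example is not taken from the paper; the function name and the threshold convention are assumptions chosen for illustration.

```python
# Illustrative sketch only: z-score thresholding, a classic statistical
# approach to outlier detection. The threshold value is a common
# convention, not a prescription from the surveyed literature.
from statistics import mean, stdev

def zscore_outliers(data, threshold=3.0):
    """Return values whose z-score magnitude exceeds the threshold."""
    mu = mean(data)
    sigma = stdev(data)
    if sigma == 0:  # all values identical: no outliers by this criterion
        return []
    return [x for x in data if abs(x - mu) / sigma > threshold]
```

Note that with very small samples a single extreme value inflates the standard deviation, so a lower threshold (e.g. 2.0) may be needed for it to be flagged; robust variants based on the median are a common remedy.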
Journal: Artificial Intelligence Review
Published: Oct 2004
Bibliographical note: Copyright © 2004 Kluwer Academic Publishers. This is an author-produced version of a paper published in Artificial Intelligence Review. This paper has been peer-reviewed but does not include the final publisher proof-corrections or journal pagination. The original publication is available at www.springerlink.com.
- NOVELTY DETECTION