Abstract
While z-scores provide participants with easy-to-interpret scores for quantitative proficiency tests, there is no universally accepted equivalent scoring method available for qualitative testing. Under the assumption that these tests follow a binomial distribution, it is possible to calculate scores that mimic the widely used z-scores and provide participants with insight into their performance level. We show that these scores, which we term a-scores, can be combined to provide a single score for multiple tests so that participants can monitor their performance over time, and discuss the use of the exact binomial test in place of uncertainty when there is no clear consensus.
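The abstract does not give the a-score formula itself; the sketch below is only an illustration of the general idea it describes, under the assumption that a participant's count of correct results k out of n items follows a Binomial(n, p) distribution with a consensus-derived per-item success probability p. The mapping of the binomial (mid-P) tail probability through the standard normal quantile function and the helper names `binomial_score` and `exact_binomial_pvalue` are assumptions for illustration, not the paper's definitions; the exact binomial test is shown via `scipy.stats.binomtest`.

```python
# Illustrative sketch only: the paper's exact a-score construction is not stated
# in this abstract. Assumed model: k correct results out of n items, with the
# count of correct results following Binomial(n, p) for a consensus-based p.
from scipy.stats import binom, binomtest, norm


def binomial_score(k: int, n: int, p: float) -> float:
    """Map the binomial tail probability to a z-like score via the standard
    normal quantile function (one plausible construction, assumed here)."""
    # Mid-P evaluation of the CDF to soften the discreteness of the binomial.
    cdf_mid = binom.cdf(k - 1, n, p) + 0.5 * binom.pmf(k, n, p)
    return norm.ppf(cdf_mid)


def exact_binomial_pvalue(k: int, n: int, p: float) -> float:
    """Two-sided exact binomial test, usable when a normal approximation
    (or a clear consensus value) is doubtful."""
    return binomtest(k, n, p, alternative="two-sided").pvalue


if __name__ == "__main__":
    # Hypothetical example: 27 correct answers out of 30 items, expected
    # per-item success rate 0.95.
    k, n, p = 27, 30, 0.95
    print(f"z-like score: {binomial_score(k, n, p):.2f}")
    print(f"exact binomial p-value: {exact_binomial_pvalue(k, n, p):.3f}")
```

A negative score here simply indicates performance below the assumed expectation, mirroring how z-scores are read in quantitative schemes; combining such scores across several tests, as the abstract mentions, is not sketched.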
Field | Value
---|---
Original language | English
Pages (from-to) | 263-269
Journal | Accreditation and Quality Assurance
Volume | 24
Issue number | 4
Early online date | 23 Apr 2019
DOIs | 
Publication status | Published - Aug 2019
Bibliographical note
© Springer-Verlag GmbH Germany, part of Springer Nature 2019. This is an author-produced version of the published paper. Uploaded in accordance with the publisher’s self-archiving policy. Further copying may not be permitted; contact the publisher for details.

Keywords
- Proficiency testing
- Qualitative assessments
- a-scores
- z-scores