Abstract
INTRODUCTION: Sample size 'rules-of-thumb' for external validation of clinical prediction models suggest at least 100 events and 100 non-events. Such blanket guidance is imprecise, and not specific to the model or validation setting. We investigate factors affecting precision of model performance estimates upon external validation, and propose a more tailored sample size approach.
METHODS: Simulation of logistic regression prediction models to investigate the factors associated with the precision of performance estimates, followed by explanation and illustration of a simulation-based approach for calculating the minimum sample size required to precisely estimate a model's calibration, discrimination, and clinical utility.
RESULTS: Precision is affected by the distribution of the model's linear predictor (LP), in addition to the number of events and the total sample size. Sample sizes of 100 (or even 200) events and non-events can give imprecise estimates, especially for calibration. The simulation-based calculation accounts for the LP distribution and for (mis)calibration in the validation sample. Applying the approach to a deep vein thrombosis diagnostic model identifies a required sample size of 2430 participants (531 events) for external validation.
CONCLUSION: Where researchers can anticipate the distribution of the model's LP (e.g. based on the development sample or a pilot study), a simulation-based approach for calculating the sample size for external validation offers more flexibility and reliability than rules of thumb.
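
To make the calculation concrete, the following is a minimal Python sketch of this kind of simulation-based sample size check; it is not the authors' implementation. The assumed LP distribution (Normal with mu_lp = -2.3 and sd_lp = 1.2), the perfect-calibration assumption, the precision targets, and the grid of candidate sample sizes are illustrative placeholders that would be replaced with values anticipated from the development sample or a pilot study.

```python
# Minimal sketch (not the authors' implementation) of a simulation-based
# sample size calculation for external validation of a logistic regression
# model. Illustrative assumptions: the linear predictor (LP) in the
# validation population is roughly Normal(mu_lp, sd_lp) and the model is
# perfectly calibrated there; precision targets are expressed as the maximum
# acceptable width of an approximate 95% interval for each performance measure.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2021)

def simulated_precision(n, mu_lp=-2.3, sd_lp=1.2, n_rep=200):
    """Simulate n_rep validation samples of size n and return the Monte Carlo
    2.5th-97.5th percentile width of each performance estimate."""
    cal_slope, cal_itl, c_stat = [], [], []
    for _ in range(n_rep):
        lp = rng.normal(mu_lp, sd_lp, size=n)        # anticipated LP values
        y = rng.binomial(1, 1 / (1 + np.exp(-lp)))   # outcomes under perfect calibration
        if y.sum() in (0, n):                        # skip degenerate samples
            continue
        # Calibration slope: logistic regression of the outcome on the LP
        slope_fit = sm.Logit(y, sm.add_constant(lp)).fit(disp=0)
        cal_slope.append(slope_fit.params[1])
        # Calibration-in-the-large: intercept-only model with the LP as offset
        itl_fit = sm.GLM(y, np.ones_like(lp),
                         family=sm.families.Binomial(), offset=lp).fit()
        cal_itl.append(itl_fit.params[0])
        # Discrimination: C-statistic (area under the ROC curve)
        c_stat.append(roc_auc_score(y, lp))
    width = lambda est: float(np.subtract(*np.percentile(est, [97.5, 2.5])))
    return {"cal_slope": width(cal_slope),
            "cal_in_large": width(cal_itl),
            "c_statistic": width(c_stat)}

# Increase the candidate sample size until every measure meets its target width
# (the targets below are placeholders, not values from the paper).
targets = {"cal_slope": 0.30, "cal_in_large": 0.30, "c_statistic": 0.10}
for n in range(500, 5001, 500):
    widths = simulated_precision(n)
    report = ", ".join(f"{k}={v:.3f}" for k, v in widths.items())
    if all(widths[k] <= targets[k] for k in targets):
        print(f"n = {n} meets the precision targets ({report})")
        break
    print(f"n = {n} is too small ({report})")
```

In a full application, the same loop would also estimate clinical utility (e.g. net benefit at the intended decision threshold) and could inject anticipated miscalibration into the simulated outcomes, which the paper's approach accounts for.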
| Original language | English |
|---|---|
| Pages (from-to) | 79-89 |
| Number of pages | 11 |
| Journal | Journal of Clinical Epidemiology |
| Volume | 135 |
| Early online date | 14 Feb 2021 |
| DOIs | |
| Publication status | Published - 1 Jul 2021 |