TY - JOUR
T1 - The reliability of the serial reaction time task
T2 - meta-analysis of test-retest correlations
AU - Oliveira, Cátia M
AU - Hayiou-Thomas, Marianna E
AU - Henderson, Lisa M
N1 - © 2023 The Authors.
PY - 2023/7/19
Y1 - 2023/7/19
N2 - The Serial Reaction Time task (SRTT), one of the most widely used tasks to index procedural memory, has been increasingly employed in individual differences research examining the role of procedural memory in language and other cognitive abilities. Yet, despite consistently producing robust procedural learning effects at the group level (i.e. faster responses to sequenced/probable trials versus random/improbable trials), these effects have recently been found to have poor reliability. In this meta-analysis (N = 7 studies), comprising 719 participants (mean age = 20.81, s.d. = 7.13), we confirm this 'reliability paradox'. The overall test-retest reliability of the robust procedural learning effect elicited by the SRTT was found to be well below acceptable psychometric standards (r < 0.40). However, split-half reliability within a session is better, with an overall estimate of 0.66. There were no significant moderating effects of sampling characteristics (participants' age), methodology (e.g. number of trials, sequence type) or analytical decisions (whether all trials were included when computing the procedural learning scores; using different indexes of procedural learning). Thus, despite the task producing robust effects at the group level, until we have a better understanding of the factors that improve its reliability, using the SRTT for individual differences research should be done with caution.
DO - 10.1098/rsos.221542
M3 - Article
C2 - 37476512
SN - 2054-5703
VL - 10
JO - Royal Society Open Science
JF - Royal Society Open Science
IS - 7
M1 - 221542
ER -