As state legislators and school district and sports organization officials around the country consider how to implement new concussion policies, computerized neuropsychological ("baseline") testing is often on the table. A new study suggests that while valuable, baseline testing should be used in conjunction with other measures to provide a more complete picture of a concussed athlete's injury.
ImPACT (Immediate Post-Concussion Assessment and Cognitive Testing), a computerized testing system commonly used to evaluate sports-related concussions, misclassified up to 29 percent of healthy participants in a recent study by a kinesiology researcher at the University of Texas at Arlington.
Baseline neuropsychological tests can be administered via computer, on paper, or through a new app. Typical measurements include memory, reaction time, and information-processing speed. Taken before an athlete's season begins, the test provides pre-injury cognitive performance data against which post-injury test results can be compared. Medical providers often use these data to help determine a return-to-play schedule for the athlete.
Forty-five healthy, non-athlete students between the ages of 18 and 24 who had not suffered a concussion within the previous six months were given a health questionnaire that included a symptom inventory. Green's Word Memory Test was then administered for five minutes before the subjects took the ImPACT test, and again afterward. Students completed this testing procedure on three separate occasions: first to establish a "baseline" (day 1), then 45 days later, and again five days after that, at day 50.
"The results from the study of the ImPACT computerized neuropsychological testing system emphasize the need for multiple types of assessments," said Jacob Resch, an assistant professor of kinesiology and director of the University's Brain Injury Laboratory. During the study, the system had "only poor to good reliability" in 45 healthy participants, he said.
Visual motor speed was the only category out of five on the ImPACT test that met the researchers' measure of reliability. This composite corresponds to the X's and O's, symbol match, and "three letters" modules on the ImPACT test. The other four main outcome measures, among them verbal memory, visual memory, and reaction time, did not meet the criteria established by the researchers.
These findings are similar to those reported by Dr. Steven Broglio (University of Michigan) in a 2007 article published in the Journal of Athletic Training. In that investigation, college students were given three common computer-based tests (ImPACT, Headminder CRI, and Concussion Sentinel) at days 1 and 45, with misclassification rates ranging from 20 to 38 percent. "It is good to see results that are similar to what we reported. I think when the two studies are looked at simultaneously, clinicians should be very cautious in using only one form of testing when evaluating for concussion," Broglio said.
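Neither article spells out how a healthy participant comes to be "misclassified," but test-retest studies of this kind typically apply a reliable change index (RCI): a retest score counts as changed only if it moves by more than measurement error allows, and a healthy participant flagged as changed is a misclassification. The sketch below is illustrative only; the scores, the 1.645 cutoff, and the helper names are assumptions for the example, not taken from either study.

```python
import math

# Illustrative, made-up baseline and 45-day retest scores for one
# hypothetical composite; these numbers are NOT from either study.
baseline = [85, 90, 78, 92, 88, 81, 95, 87, 84, 91]
retest   = [83, 94, 75, 90, 89, 79, 96, 82, 85, 93]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    # Sample standard deviation (n - 1 denominator).
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def pearson_r(xs, ys):
    # Test-retest correlation between the two administrations.
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

def reliable_change_flags(base, post, z_crit=1.645):
    """Flag each retest whose change exceeds measurement error.

    RCI = (post - base) / SE_diff, where SE_diff = SEM * sqrt(2)
    and SEM = SD_baseline * sqrt(1 - r). |RCI| > z_crit counts as
    a 'reliable' change; in healthy subjects, a misclassification.
    """
    r = pearson_r(base, post)
    sem = sd(base) * math.sqrt(1 - r)
    se_diff = sem * math.sqrt(2)
    return [abs((p - b) / se_diff) > z_crit for b, p in zip(base, post)]

flags = reliable_change_flags(baseline, retest)
misclassified = sum(flags) / len(flags)
print(f"test-retest r = {pearson_r(baseline, retest):.2f}")
print(f"healthy participants flagged as changed: {misclassified:.0%}")
```

Even with a fairly high test-retest correlation, ordinary score fluctuation in this toy data pushes a couple of healthy retests past the cutoff, which is the general mechanism behind misclassification rates of the size both studies report.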
Resch reached the same conclusion. The error rate falls below 10 percent when all three testing modalities are used, that is, when a concussed athlete is evaluated for symptoms, for balance and physical coordination, and for cognitive ability.
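The arithmetic behind that sub-10-percent figure can be illustrated under one strong simplification the article does not itself make: that the three assessments miss a concussion independently, so an injury goes undetected only when every modality misses it. The rates below are hypothetical placeholders, not figures from the study.

```python
# Illustrative arithmetic only: the rates are hypothetical, and the
# independence assumption is a simplification not made in the article.
miss_symptoms  = 0.30  # hypothetical false-negative rate, symptom inventory
miss_balance   = 0.35  # hypothetical false-negative rate, balance testing
miss_cognitive = 0.29  # ballpark of the misclassification rates reported

# Under independence, a concussion slips through only if every
# modality misses it, so the miss rates multiply.
combined_miss = miss_symptoms * miss_balance * miss_cognitive
print(f"combined miss rate: {combined_miss:.1%}")  # well under 10%
```

The point of the multiplication is qualitative: modalities that are individually unreliable can still be jointly dependable, provided their errors do not all line up.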
"What our results stress is that when developing a sport-related concussion management protocol, a multifaceted approach including self-reported symptomatology, a balance assessment and computerized neuropsychological testing should be implemented," Resch said. "Ultimately, by incorporating this approach to concussion management we significantly reduce the risk of returning an athlete to the playing field too soon after a concussion, which may lead to catastrophic consequences."
Resch began to study the test's reliability while at the University of Georgia. Co-authors on an abstract presented at the conference include University of Georgia faculty and researchers from the Shepherd Center in Atlanta and Georgia Neurological Surgery in Athens, Ga.
"Research like this reinforces the need to look critically at what we do and make it better," said Dr. Kimberly Walpert, coauthor of the study and a partner at Georgia Neurological Surgery. "I think this is particularly important in view of the growing evidence regarding how multiple, seemingly insignificant brain injuries may have long-term consequences for athletes."
Resch, who is now using ImPACT in a lab study of North Texas high school athletes at risk for concussion, said researchers wanted to add to the limited scientific examination of the ImPACT test's reliability. "When we administer the ImPACT test to the high school athletes we try to provide the optimal environment to ensure a valid baseline effort. We test no more than 20 athletes at a time in a quiet, distraction-free computer lab in the presence of a trained clinician, and we deliver ImPACT individually post-injury."
The clinician explains the test more thoroughly and describes in greater detail what the athlete will see on the screens.
Resch's findings were presented in June at the National Athletic Trainers' Association annual meeting and clinical symposia in New Orleans.
Dr. Michael Koester, director of the Slocum Sports Concussion Program in Eugene, Oregon, said, "This study highlights the difficulty in correctly diagnosing concussion, and more importantly, determining safe return to play, more so than it points out any 'flaws' in ImPACT. No one uses ImPACT as a single determinant in concussion management. It is simply one tool used to help make a clinical decision."
ImPACT co-founder Dr. Michael Collins responds: "We commend Dr. Resch and colleagues for researching computerized neurocognitive testing, and, in fact, many other independent, peer-reviewed studies have already been published examining this important psychometric issue. For example, in a large independent study of college athletes, Schatz (2009) reported that good stability occurred with the ImPACT test battery, that the study 'documented the stability of baseline pre-season cognitive assessments over a period of two years,' and that 'mean ImPACT composite scores and symptoms showed little variation between the two assessments.' Schatz concluded that ImPACT is a reliable measure of cognitive function in assessing sports-related concussion. Moreover, another, significantly larger independent study involving nearly 400 athletes was recently accepted for publication and is forthcoming. In that study, test-retest correlations for the online version of ImPACT were found to be good to excellent. In fact, reliability of the online version of the program was found to be significantly improved relative to the 2009 study, where the desktop version of ImPACT was utilized.
"Miller and colleagues (2007) also tested athletes with ImPACT at pre-season, mid-season and post-season and found the scores stable throughout the season in non-concussed athletes: 'ImPACT scores are not significantly altered by a season of repetitive contact in collegiate football athletes who have not sustained a concussion.' In short, these and numerous other studies among the nearly 100 published studies utilizing ImPACT have systematically shown the stability, validity, sensitivity and added value of this system in assessing sports-related concussion.
"Although Dr. Resch's study has yet to be peer reviewed or published and we are not privy to the data or results, it is important to note that the study was conducted in a smaller group than previous studies (n = 45), and, perhaps most important, it appears these researchers conducted memory testing with an alternative measure just prior to administering ImPACT. This is similar to the referenced work by Broglio and colleagues, who administered ImPACT as well as two other computerized and paper-and-pencil neurocognitive measures. In short, it is not at all surprising to see less than adequate reliability with these methodologies. Because ImPACT is a cognitive tool and these other measures use alternative, interfering memory stimuli, significant interference effects contaminate the data. We would not expect ImPACT to be reliable when other memory tests are administered alongside it, as interference effects are fully expected with such a methodology. This likely explains why the ImPACT memory reliability scores in the Resch and Broglio studies are suboptimal. In clinical practice, we would never administer competing memory stimuli just prior to administering ImPACT, as this would surely affect the integrity of the scores.
"Within the context of the outlined comments, however, we completely concur with Dr. Resch in stating that multiple assessment modalities should be utilized in evaluating and treating sports concussion. Dr. Resch reaches the same overall conclusion that ImPACT has advocated for years, and he summarizes our approach almost to the letter. We advocate the use of the ImPACT test as a tool, one of several in the clinician's toolbox for assessing and managing concussion. We always conduct a very detailed clinical evaluation, including an extensive clinical interview, vestibular and balance testing, and computerized (and sometimes paper-and-pencil) neurocognitive testing, all of which help to elucidate the overall clinical picture. With all of these data, we attempt to make responsible decisions regarding injury severity, considerations for academic accommodation during recovery, appropriate physical exertion levels for the concussed athlete, and ultimately when athletes may return to play in a safe manner. Concussion management deserves a detailed, comprehensive and well-informed evaluation, and ImPACT is one tool to help determine these issues. We agree wholeheartedly with his comments in this respect."