
Original report

MEASURING COGNITIVE ASSESSMENT AND INTERVENTION BURDEN IN PATIENTS WITH ACQUIRED BRAIN INJURY: DEVELOPMENT OF THE “HOW MUCH IS TOO MUCH?” QUESTIONNAIRE

Jennifer C. Tomaszczyk, PhD1, Bhanu Sharma, MSc1,2, Albert A. Chan, MCP1,3, Brenda Colella, MA1, Jenkin N. Y. Mok, MA4, Dorcas Beaton, PhD5, Bruce K. Christensen, PhD, CPsych6 and Robin E. A. Green, PhD, CPsych1,7

From the 1Research Department, Toronto Rehabilitation Institute, 2Department of Medical Sciences, McMaster University, 3Adler School of Professional Psychology, Vancouver Campus and Research Department, Toronto Rehabilitation Institute, 4University of Toronto Scarborough, 5Department of Occupational Sciences and Occupational Therapy, Graduate Departments of Rehabilitation Sciences, and the Institute for Health Policy, Management and Evaluation, University of Toronto, 6Research School of Psychology, Australian National University, and 7Department of Psychiatry, University of Toronto, Toronto, Canada.

Abstract

Objective: To design and preliminarily test a questionnaire intended to measure patient treatment burden resulting from participation in cognitive assessments and interventions.

Methods: An expert consensus process was used to develop the concept of patient treatment burden and to determine the first set of questionnaire items and administration protocol. The pilot questionnaire was administered to 20 patients with mild to severe acquired brain injuries on completion of a 2-h or longer neuropsychological assessment. Following preliminary testing, the questionnaire was revised and re-evaluated by a second expert panel and content validity was assessed.

Results: Burden was defined as psychologically and/or physically aversive symptoms in response to cognitive assessment or intervention. The first questionnaire contained 21 items assigned to 3 categories: physical, cognitive, and emotional. Eighty-five percent of patients endorsed symptom level increases, with “tired/fatigued” the most frequently endorsed item (80% of patients). Instructions and test items were easily understood, and the questionnaire was quick to administer. Content validity ratio (CVR) of the revised questionnaire yielded 23 acceptable items and a subset met the highest CVR threshold (>0.78).

Conclusion: This patient-reported outcome will ultimately help patients give voice to aversive experiences, and help clinicians and researchers to monitor and adapt assessments/treatments appropriately. Future steps in development are described.

 

Key words: acquired brain injury; neuropsychology; cognitive assessment; neurorehabilitation; patient-reported outcome; patient burden; questionnaire.

Accepted Mar 22, 2018; Epub ahead of print May 28, 2018

J Rehabil Med 2018; 50: 519–526

Correspondence address: Robin E. A. Green, Toronto Rehabilitation Institute, Research Department, 550 University Avenue, Toronto, Ontario, M5G 2A2, Canada. E-mail: robin.green@uhn.ca

Lay Abstract

There are currently no tools for measuring adverse effects (e.g., fatigue, stress) of cognitive testing and interventions in patients with acquired brain injury (ABI). We designed a preliminary questionnaire for patients with ABI for measuring adverse effects of cognitive testing and interventions, and administered it to 20 patients who had completed intensive cognitive assessments. The questionnaire asked patients about negative cognitive, physical and emotional symptoms resulting from the session. Eighty-five percent of patients reported worsening of at least one symptom, with feelings of tiredness/fatigue most common. The questionnaire was then revised, and experts were asked to rate the appropriateness of items.  The questionnaire’s development is ongoing. Such a questionnaire is needed to enable patients to voice any aversive experiences of cognitive testing/intervention, and to enable clinicians and researchers to monitor and adapt assessments and treatments appropriately. The tool is particularly relevant for remote (e.g. internet-based) assessment and intervention delivery.

Introduction

Patient treatment burden has been described as the “work” involved in being a patient (1). The concept includes the investment of time, the mental and physical effort allocated to activities such as drug management and self-monitoring, and the impact of these demands on a patient’s quality of life and functioning (2).

The physically and/or psychologically aversive symptoms that are the immediate result of cognitive assessments and interventions constitute what may be considered a sub-type of patient treatment burden. Such symptoms include headache, fatigue and irritability. While a number of tools assessing treatment burden have been developed (see Eton et al. (2), for systematic review), to date, there is a gap in the literature for tools that measure the aversive effects of cognitive assessments and interventions.

Such a patient-reported outcome (PRO) could allow patients undergoing such procedures to give voice to aversive experiences, and allow clinicians and researchers to monitor whether a cognitive assessment/intervention should be modified on the basis of patient experience. This is especially important because some neurorehabilitation interventions are intended to elicit maximal mental exertion. For example, cognitive environmental enrichment paradigms confer neural and behavioural benefits in part through continuous challenge and time-intensive training (3, 4); moreover, there is increasing evidence that too much cognitive exertion may have deleterious effects in certain contexts (e.g. acute concussion (5–7)).

Such a PRO would support a range of research into the impact of aversive experiences. For example, do aversive experiences in one therapy result in less engagement in subsequent therapies and, if so, does this occur at a certain threshold or intensity of burden? Does aversive experience impact feasibility and efficacy of an intervention and, if so, how? Are aversive effects of intensive cognitive activity in acute concussion associated with poorer recovery?

There are several contexts in which this type of PRO would be of particular value. This includes the context of remote assessment and treatment (i.e. telephone/internet-delivered), which may preclude direct patient observation, and face-to-face group settings, in which social pressures may preclude the open acknowledgement of aversive symptoms.

Thus, a tool that quickly and easily measures symptoms of patient burden attributable to cognitive assessment/intervention could provide the therapist/experimenter with pertinent clinical information, permitting timely and appropriate management. Such a tool could also foster a greater understanding of the potential benefits and harms of cognitive assessments and interventions, particularly highly demanding ones, helping us to ascertain “how much is too much?” for a given patient or population.

As a step towards gaining a better understanding of the concept of patient treatment burden in the above-described contexts, and of documenting its presence, we have taken an expert consensus-driven and empirical approach to the initial stages of development of a measure of patient treatment burden called the How Much is Too Much? questionnaire. The questionnaire was specifically designed for patients with acquired brain injury (ABI), although it has potential application to any cognitively impaired population.

The primary aims of this paper are: (i) to briefly describe the expert consensus process used in the development of the construct and first questionnaire items; (ii) to describe pilot testing of the tool and report rate of item endorsement in patients with ABI; and, (iii) to describe item selection agreement between expert raters of a revised questionnaire using the content validity ratio (CVR) item statistic, a measure of content validity.

MATERIAL AND METHODS
Expert consensus process for item selection and tool format

Procedures. The expert consensus process was concerned with: (i) establishing the need and purpose for the questionnaire; (ii) refining the psychological construct of patient treatment burden in the context of cognitive assessment/intervention; and (iii) generating items, instructions, and format for the first version of the questionnaire (see Box 1). To achieve these aims, a panel of 12 experts was convened: 3 front-line clinicians, 6 clinician-scientists, and 3 post-doctoral fellows from disciplines that undertake cognitive assessment/intervention in patients with ABI, including concussion, moderate-severe traumatic brain injury (TBI) and stroke. Disciplines included neuropsychology (3), clinical psychology (4), psychometry (2), occupational therapy (2) and rehabilitation therapy (1); the panel also contained expertise in methodology and test development. One of the panel members had also previously sustained a severe TBI. The aim was to convene a panel with relevant content knowledge from clinical training, as well as extensive front-line clinical experience with patients with ABI and their caregivers.

The initial item set was generated by a sub-group of the expert panel, who compiled an exhaustive list of candidate items. An iterative process was used to identify redundancy, as well as items that might be ambiguous for patients. Each item was then evaluated by the full panel of 12 experts, during which suggestions were made for adding/removing items and for tool format and instructions.

Pilot testing of questionnaire

Participants. The study received approval from the Toronto Rehabilitation Institute Research Ethics Board (Jan 2015, REB#14-8156-DE). The first version of the tool was administered by psychometrists (n = 2) and clinical neuropsychology trainees (n = 5) trained in clinical assessment, including the administration of questionnaires, and with extensive experience with brain-injured patients. These clinicians were also members of the expert consensus panel.

Patient participants were a convenience sample. All were undergoing clinical neuropsychological assessments for mild to severe ABI, either as part of an ongoing clinical research study or as part of a clinical assessment on an out-patient ABI programme. Inclusion criteria were: history of ABI; aged 18 years or older; sufficient complaints of cognitive dysfunction to warrant neuropsychological assessment; able to provide fully informed consent; functional command of English. Participants with active psychotic disorder were excluded. Patients with moderate-severe TBI or stroke were previous in-patients or day hospital patients of the Acquired Brain Injury Program of Toronto Rehab. Patients with persisting symptoms of concussion (or post-concussion syndrome; PCS) were recruited from a workshop designed for patients with persisting symptoms of concussion. There was no independent medical verification of the initial concussion or persisting symptoms.

Clinician evaluation form. This form comprised a series of questions enabling clinicians administering the questionnaire to: (i) provide feedback on the scale and its administration, which would be used to evaluate the feasibility of the questionnaire and its current application; and (ii) collect basic demographic data on patients.

Procedures. The purpose of the pilot testing was to ascertain which questionnaire items were most and least frequently endorsed, and to what degree, in order to inform further refinement of the scale’s items. For endorsed items, we were also interested in whether there was any preliminary evidence of clustering.

Patient participants were recruited by the above-mentioned clinicians and all patients provided informed consent. The cognitive sessions selected for initial piloting of the tool were clinical neuropsychological assessments because of the known cognitive demands of these assessments, which are designed to test the limits of capacity across multiple cognitive domains.

Assessments ranged from 2 to 5 h and comprised conventional clinical neuropsychological measures of attention, concentration, and speed of processing, memory function, visuospatial and language skills, executive functions, and estimated pre-morbid IQ. The first version of the questionnaire was administered at the end of the neuropsychological assessment session. After the patient left, the clinician then completed the clinician evaluation form.

Content validity ratio of scale items

Procedures. Based on the initial pilot testing, a sub-set of the expert panel re-convened to refine the constructs from the original scale, and then Lawshe’s (8) CVR was used to establish the baseline content validity of test items. The CVR is an internationally recognized item statistic used for establishing content validity (9). It involves a linear transformation of the proportional level of agreement on item-level ratings between expert panellists. The analysis and data collection were informed by Gilbert & Prion (10).

Ratings were collected from an expert panel (comprising members that overlapped with the original consensus panel) by asking: “Please rate each item for its appropriateness for measuring burden, rating each item as (2) Essential, (1) Useful, but not essential, or (0) Not necessary or redundant”, where “burden” referred to “any elevation in aversive psychological or physical/somatic symptoms (e.g. headache, fatigue, irritability) that you think a brain-injured patient might experience following a cognitive assessment/intervention, whether therapist delivered (face-to-face or online) or self-administered (e.g. a computerized brain exercise).”

The clinicians in this panel had expert and front-line experience in treating patients with brain injuries. The experts comprised: 4 neuropsychologists, a psychologist in post-doctoral training in neuropsychology, a doctoral student intern in neuropsychology, 2 occupational therapists, 2 social workers, 2 rehabilitation therapists, and 2 speech-language pathologists. One of the clinicians had previously sustained a moderate-severe TBI. Years of experience ranged from 5 to 21 years. Their ratings were coded and anonymized before analysis.

CVR values were calculated using Lawshe’s (8) original formula:

CVR = (nₑ − N/2) / (N/2)

where nₑ is the number of panellists rating an item as “Essential” and N is the total number of panellists. In other words, the more panellists beyond half of the panel who rated an item as “Essential”, the greater that item’s content validity.
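For concreteness, the following minimal sketch (in Python) computes the CVR for a single item from panellist ratings coded as above; the panel data are hypothetical, not the study’s actual ratings.

```python
def content_validity_ratio(ratings):
    """Lawshe's CVR for one item, with ratings coded as above:
    2 = Essential, 1 = Useful but not essential, 0 = Not necessary."""
    n = len(ratings)                            # N: total number of panellists
    n_essential = sum(r == 2 for r in ratings)  # n_e: "Essential" ratings
    return (n_essential - n / 2) / (n / 2)

# Hypothetical panel of 14 in which 11 panellists rate the item "Essential".
ratings = [2] * 11 + [1, 1, 0]
print(round(content_validity_ratio(ratings), 3))  # 0.571, i.e. (11 - 7) / 7
```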

For CVR value interpretation, general principles described in Lawshe’s original paper (8) were used, as informed by suggestions by Polit et al. (11), with additional reference to updated critical values recalculated by Ayre & Scally (12). CVR cut-offs had been previously recalculated in Wilson et al. (9), but were not used, due to limitations described by Ayre & Scally (12).

RESULTS
Expert consensus panel for item selection and tool format

The need for the tool was unanimously agreed upon by panel members, given the clinical importance of a patient-centred expression of burden, the need to better understand the impact(s) of burden, as described above, and the dearth of existing instruments. Broad definitions of burden were produced, and the purpose of the tool initially conflated measuring patient treatment burden with measuring its impact. In smaller follow-up meetings, the working definition was refined and simplified to “symptoms experienced by the patient as physically or psychologically aversive in response to cognitive assessment or intervention” and we established that the tool would be designed as a “yardstick” for burden, one that could be used to quantify patient treatment burden, but did not directly measure the impact of burden itself. More concretely, we differentiated between measuring the presence and severity of symptoms that a patient experiences in response to assessment/intervention vs. the consequences or impact of those symptoms. The latter might include adherence to the training protocol for which the burden was measured, the effect of symptoms on ability/willingness to participate in a subsequent activity or deleterious effects on the brain (e.g. during the early post-concussion period).

Before and during the initial meeting, a set of 39 items was generated by members of the panel. These were intended to represent as wide a range as possible of aversive symptoms that might be experienced by patients in response to cognitive assessment/intervention and thereby to represent different components of burden.

The iterative process yielded a final set of 21 items that were included in the first version of the tool. The items were provisionally categorized as physical, emotional, cognitive or other. These initial categories were based on subjective groupings rather than hypothesized psychometric construct dimensions; their purpose was to allow hypotheses to be generated for future iterations of the questionnaire, and thus the degree to which the items represented the categories was not ascertained.

All panel members reached consensus regarding the format of the questionnaire and the delivery protocol, including instructions, and these recommendations were used to construct the How Much is Too Much? questionnaire, version 1. This first version included 21 items from 3 broad categories. There were 7 “physical” items: tired/fatigued; eye-strain/blurred vision; headache; pain (other), e.g. neck, arm, hand; ear ringing/popping; off balance (physically); and dizzy/lightheaded. There were 10 “emotional” items: overwhelmed; frustrated; upset; stressed; racing thoughts; irritable; anxious; embarrassed; lonely/isolated; and sad. There were 2 “cognitive” items, foggy and distractible, and 2 additional items that did not fall into these categories: uncomfortable and bored. The response scale was a 4-point Likert-type scale with the labels “less”, “same”, “more”, and “much more”, on which patients rated their current (post-session) symptoms with respect to pre-session symptoms. Conceptually, higher summed total scores, across all four item groupings and within each grouping, should represent higher levels of burden. The instructions for the questionnaire were: “Please rate the extent to which you are experiencing any of the following. Please simply compare yourself to how you felt before starting the session”.
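To illustrate this conceptual scoring, the following minimal sketch sums ratings within each provisional category. No numeric scoring key was fixed at this stage, so the 0–3 mapping of the response labels and the abbreviated item lists below are assumptions for demonstration only.

```python
# Assumed numeric mapping of the 4-point response scale (not specified above).
SCALE = {"less": 0, "same": 1, "more": 2, "much more": 3}

# Abbreviated, illustrative item lists for the provisional categories.
CATEGORIES = {
    "physical": ["tired/fatigued", "headache"],
    "emotional": ["overwhelmed", "stressed"],
    "cognitive": ["foggy", "distractible"],
}

def category_scores(responses):
    """Sum the assumed numeric ratings within each provisional category."""
    return {
        cat: sum(SCALE[responses[item]] for item in items if item in responses)
        for cat, items in CATEGORIES.items()
    }

responses = {"tired/fatigued": "much more", "headache": "more",
             "overwhelmed": "same", "stressed": "more", "foggy": "more"}
print(category_scores(responses))  # {'physical': 5, 'emotional': 3, 'cognitive': 2}
```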

Pilot testing of questionnaire

Two patients did not meet the study inclusion criterion of ABI (P10 and P17) and were therefore excluded from further analyses. Twenty patients with ABI participated in the study, 60% of whom had PCS. All patients but one spoke English as a first language, and their English language capacity was described as “fluent”. One patient spoke English as a second language (P9), and his/her language skill was described as semi-fluent. See Table I for patient characteristics.


Table I. Patient characteristics

Item endorsements. Eighty-five percent of patients endorsed an increase in at least one symptom. Fig. 1 shows the number of patients endorsing symptom increases on the scale (i.e. endorsements of “more” or “much more”). Among endorsed items, physical items predominated: 80% of patients endorsed “tired/fatigued”, 30% endorsed “eye-strain/blurred vision” and “headache”, and 25% endorsed “pain (other)” and “dizzy/lightheaded”. Regarding emotional items, 30% of patients endorsed “overwhelmed” and 35% endorsed “stressed”; none endorsed “sad”. For cognitive items, 25% of patients endorsed “distractible” and 45% endorsed “foggy”.


Fig. 1. Number of patients endorsing each of the 21 questionnaire items (maximum count of 20 patients per item). Counts of patients rating each item as either “more” (experiencing pre–post session increases in magnitude of a given symptom) or “much more” (experiencing greater pre–post session increases in magnitude of a given symptom), are shown.
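For illustration, the tallying behind such counts can be sketched as follows, counting, for each item, the patients whose rating was “more” or “much more”; the patient data below are fabricated.

```python
from collections import Counter

INCREASE = {"more", "much more"}  # ratings counted as a symptom increase

def item_endorsements(all_responses):
    """all_responses: one {item: rating} dict per patient."""
    counts = Counter()
    for responses in all_responses:
        for item, rating in responses.items():
            if rating in INCREASE:
                counts[item] += 1
    return counts

patients = [
    {"tired/fatigued": "much more", "headache": "more"},
    {"tired/fatigued": "more", "headache": "same"},
]
print(item_endorsements(patients))  # Counter({'tired/fatigued': 2, 'headache': 1})
```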

Patterns of endorsements. Next, item endorsements within each patient were examined. The largest numbers of symptoms endorsed as “more” or “much more” were 14 of 21 (patient 5) and 13 of 21 (patient 9). Three patients endorsed no symptom increases. Many of the items showed no endorsements at all, even in this vulnerable population under a challenging assessment, and 30% of patients reported a reduction of at least one aversive symptom. Fig. 2 shows the number of symptoms endorsed and not endorsed by each patient. Overall, 85% (17 of the 20 eligible patients) endorsed at least one symptom increase (i.e. “more” or “much more”).


Fig. 2. Counts of symptoms rated in each of 2 categories, for each patient. For the “Less or same” category, patients indicated either no change or a pre–post session decrease in the magnitude of a given symptom; for the “More or much more” category, patients indicated a pre–post session increase, or a greater pre–post session increase, in the magnitude of a given symptom. Maximum count of 21 items per patient.

As an exploratory analysis, we examined whether specific items were more frequently endorsed by patients with a greater (5 or more items) vs. smaller (4 or fewer items) total number of item endorsements. Patients with fewer overall endorsements endorsed more physical than non-physical symptoms, even though the questionnaire contains more non-physical items. Specifically, 5 of the 8 patients who endorsed 4 or fewer items showed this pattern, reporting pre–post session increases in tiredness/fatigue, eye-strain/blurred vision, or headaches; in these patients, tiredness/fatigue co-occurred with either eye-strain/blurred vision or headaches. Notably, all 3 patients who endorsed only 1 symptom endorsed “tired/fatigued”, as did all but 1 of the 8. Among patients with a greater number of endorsements (i.e. 5 or more items), 5 of 9 endorsed more non-physical than physical symptoms; frequently endorsed items included “foggy”, “stressed”, and “overwhelmed”, which tended to cluster together. Regarding physical symptoms, all 9 of these patients reported increases in “tired/fatigued”, and frequently reported “eye-strain/blurred vision”, “headache”, “pain (other)”, and “dizzy/lightheaded”, which also tended to cluster together.

Preliminary feasibility: clinician evaluation form. The mean administration time for the questionnaire was 1 min 42.1 s (range 30 s to 6 min). There were no difficulties reported in understanding the tool, with the exception of 3 patients: the patient with English as a second language, for whom some items needed to be defined (e.g. “overwhelmed”); 1 patient who requested clarification on how to indicate the absence of a symptom both before and after the cognitive testing session; and 1 patient who requested clarification regarding whether to report changes relative to the start of the current testing session or to the post-injury period. All clinicians indicated that the reading level of the questionnaire was appropriate for their patients.

Clinician feedback for improving the questionnaire included: (i) differentiating items as relating to effects of the cognitive testing session vs. effects of patient-clinician rapport (e.g. embarrassed); (ii) modifying the instructions to read “Please simply compare yourself to how you felt before starting today’s session”; and (iii) expanding the response scale to include “symptom not experienced pre- or post-assessment”.

Content validity ratio of scale items

The sub-set of the expert panel that re-convened to refine the constructs from the original scale expanded the initial items to include the items listed in Fig. 3. For example, as 80% of patients endorsed the item “tired/fatigued”, we sought to refine this construct by adding a number of items that examined cognitive vs. physical fatigue (e.g. “mentally sluggish/slowed down” vs. “sleepy/drowsy”). No items were excluded at this stage.

Fig. 3 illustrates that more than half of the items (23/40) were rated as “essential” by more than 50% of the raters. Four of the items were rated as “not necessary or redundant” by at least half of the raters.


Fig. 3. Panellist endorsement of all items in revised questionnaire, listed in alphabetical order.

The CVR values are summarized in Table II, and provide an indication of the degree of agreement between members of the consensus panel regarding the appropriateness or inappropriateness of each item. According to Lawshe (8), CVR values greater than 0 give some assurance of content validity; 21 items met this recommendation. Polit et al. (11) have recommended a higher standard of 0.78 or more for panels with 3 or more raters. Five items met this criterion with high “essential” agreement: “tired/fatigued”, “trouble staying focused/concentrating”, “frustrated”, “headache”, and “overwhelmed”. A number of further items approached the criterion. Items were also evaluated against Ayre & Scally’s (12) recalculated critical CVR values, which use exact binomial probabilities to assure a level of agreement beyond chance (α = 0.05), matched to the number of panellists rating each item. For items with 14 raters, CVRcritical = 0.571; for items missing 1 rater, the adjusted cut-off was CVRcritical = 0.538. Using these values, the following items continued to meet the cut-off for item appropriateness: “frustrated”, “headache”, “mentally sluggish/slowed down”, “overwhelmed”, “pain (other): e.g. neck, arm, hand”, “sad”, “tired/fatigued”, and “trouble staying focused/concentrating”.


Table II. Lawshe content validity ratio (CVR) values for candidate items
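For verification, Ayre & Scally’s (12) critical values can be reproduced with a one-tailed exact binomial test (p = 0.5, α = 0.05): the sketch below finds the smallest number of “Essential” ratings whose tail probability is at or below α, then converts it to a CVR, reproducing both cut-offs quoted above.

```python
from math import comb

def critical_cvr(n_panellists, alpha=0.05):
    """Smallest CVR whose level of agreement exceeds chance at the given alpha."""
    for n_essential in range(n_panellists + 1):
        # One-tailed P(X >= n_essential) for X ~ Binomial(n_panellists, 0.5)
        tail = sum(comb(n_panellists, k)
                   for k in range(n_essential, n_panellists + 1)) / 2 ** n_panellists
        if tail <= alpha:
            return (n_essential - n_panellists / 2) / (n_panellists / 2)
    return 1.0

print(round(critical_cvr(14), 3))  # 0.571, the cut-off for items with 14 raters
print(round(critical_cvr(13), 3))  # 0.538, the adjusted cut-off with 1 rater missing
```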

DISCUSSION

Cognitive assessments and interventions are widely used in the ABI population, but a tool to measure their potentially burdensome impact on patients has yet to be developed. We developed a pilot questionnaire to do so. We found that a large majority of our patients with ABI (85%) experienced increases in their symptom levels related to participation in cognitive assessments, with “tired/fatigued” the most frequently endorsed symptom. The tool was found to be straightforward for clinicians to administer. Simple modifications to the instructions, based on clinician feedback, will be incorporated in subsequent iterations to make the questionnaire even easier for patients to complete. The mean time for completion of the tool was under 2 min, suggesting that such a tool could easily be incorporated into face-to-face or remote assessment/treatment contexts.

Based on the initial findings we added further questionnaire items, many pertaining to fatigue, as we wished to clarify the type(s) of fatigue experienced (e.g. mental/physical). The expanded version contained 40 items. Lawshe’s (8) CVR was employed to establish the content validity of the items. A subset of 23 items, which more than half of the consensus panel rated as essential, provided definitive information for future retention or deletion of items; of those, “tired/fatigued”, “trouble staying focused/concentrating”, “frustrated”, “headache”, and “overwhelmed” met the highest content validity criterion, with a CVR > 0.78 (11, 12).

Further development and scale refinement would incorporate iterative CVR ratings within a structured, anonymous process, such as the Delphi method, in which item generation and reduction are guided by the CVR and other sources of information.

While some items showed low CVR and few endorsements as “essential” (e.g. worn out, pain, weakness, embarrassed), we will consult with an additional expert panel to discuss retention of items (or addition of new ones) concerning symptoms considered to be rare, but of high clinical importance. Also of interest is whether the tool can inform clinicians of a threshold that reflects “too much” patient burden, which may manifest as a total score above the threshold and/or high levels of specific symptoms. It is interesting that patients who endorsed relatively fewer items endorsed predominantly physical items, specifically tiredness/fatigue, eye-strain/blurred vision, and headaches, whereas patients who endorsed relatively more items also tended to endorse these physical items in addition to greater numbers of non-physical items. One hypothesis generated from this pattern of item endorsement is that there is an order of symptoms that characterizes increasing burden from cognitive assessments. For example, burden from a neuropsychological assessment may first manifest as tiredness, progressing to symptoms of feeling foggy, stressed, and overwhelmed. The presence of these latter symptoms may suggest that a burden threshold has been reached; the impact/consequences of reaching the threshold would require empirical investigation.

Limitations of the current study include the use of a small sample of patients with ABI (precluding more rigorous psychometric analysis of the tool), the lack of a standardized method for item selection, and limited characterization of the patient sample. These limitations will be addressed in future stages of development, which will include administration of the revised questionnaire to a broader cross-section and a larger number of patients with ABI. Importantly, we will obtain patient input on item addition/retention and questionnaire instructions through cognitive interviews in order to better characterize how patients experience burden.

In conclusion, we have introduced a novel context for the measurement of patient treatment burden (cognitive assessment/intervention burden in ABI) and have taken the first steps in the development of a questionnaire for measuring this burden. The tool is intended to ultimately help patients, clinicians and researchers to recognize burden, and to evaluate how burden may impact a patient’s capacity to benefit from the assessment or treatment, whether there are deleterious psychological or neurological consequences of burden, and the possible impact of burden on subsequent therapeutic and non-therapeutic activities.

REFERENCES
  1. Tran V-T, Harrington M, Montori VM, Barnes C, Wicks P, Ravaud P. Adaptation and validation of the Treatment Burden Questionnaire (TBQ) in English using an internet platform. BMC Medicine 2014; 12: 109.
  2. Eton DT, Elraiyah TA, Yost KJ, Ridgeway JL, Johnson A, Egginton JS, et al. A systematic review of patient-reported measures of burden of treatment in three chronic diseases. Patient Relat Outcome Meas 2013; 4: 7.
  3. Curlik DM, 2nd, Shors TJ. Learning increases the survival of newborn neurons provided that learning is difficult to achieve and successful. J Cogn Neurosci 2011; 23: 2159–2170.
  4. Dalla C, Bangasser DA, Edgecomb C, Shors TJ. Neurogenesis and learning: acquisition and asymptotic performance predict how many new cells survive in the hippocampus. Neurobiol Learn Mem 2007; 88: 143–148.
  5. Giza CC, Hovda DA. The new neurometabolic cascade of concussion. Neurosurgery 2014; 75 Suppl 4: S24–33.
  6. Henry LC, Tremblay S, Boulanger Y, Ellemberg D, Lassonde M. Neurometabolic changes in the acute phase after sports concussions correlate with symptom severity. J Neurotrauma 2010; 27: 65–76.
  7. Covassin T, Crutcher B, Wallace J. Does a 20 minute cognitive task increase concussion symptoms in concussed athletes? Brain Inj 2013; 27: 1589–1594.
  8. Lawshe CH. A quantitative approach to content validity. Personnel Psychol 1975; 28: 563–575.
  9. Wilson FR, Pan W, Schumsky DA. Recalculation of the critical values for Lawshe’s content validity ratio. Meas Eval Counsel Dev 2012; 45: 197–210.
  10. Gilbert GE, Prion S. Making sense of methods and measurement: Lawshe’s Content Validity Index. Clin Simulat Nurs 2016; 12: 530–531.
  11. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health 2007; 30: 459–467.
  12. Ayre C, Scally AJ. Critical values for Lawshe’s content validity ratio: revisiting the original methods of calculation. Meas Eval Counsel Dev 2014; 47: 79–86.
