Introduction: Distance or remote cognitive assessments, administered via phone or computer platforms, have emerged as possible alternatives to traditional assessments performed during office visits. Distance refers to any nontraditional assessment feature, not only or necessarily location. We conducted a systematic review to examine the psychometric soundness of these approaches. Method: We searched PubMed, PsycINFO, AgeLine, and Academic Search Premier for articles published between January 2008 and June 2020. Studies were included if participants were over the age of 50, a structured assessment of cognitive function in older adults was evaluated, the assessment method was deemed distant, and validity and/or reliability data were reported. Assessment distance was defined as having any of the following features: use of an electronic test interface, nonroutine test location (e.g., home), test self-administered, and test unsupervised. Distance was categorized as low, intermediate, or high. Results/Discussion: Twenty-six studies met inclusion criteria. Sample sizes ranged from n = 8 to 8,627, and mean age ranged from 57 to 83 years. Assessments included screens, brief or full batteries, and were performed via videoconferencing, phone, smartphone, or tablet/computer. Ten studies reported on low, 11 on intermediate, and 5 on high distance assessments. Invalid performance data were more frequent with older age and cognitive impairment. Convergent validity data were reported consistently and suggested a decline with increasing distance: r = 0.52–0.80 for low, 0.49–0.75 for intermediate, and 0.41–0.53 for high distance. Diagnostic validity estimates presented a similar pattern. Reliability data were reported too inconsistently to allow evaluation. Conclusion: The validity of cognitive assessments with older adults appears supported at lower but not higher distance. Less is known about the reliability of such assessments. Future research should delineate the person and procedure boundaries for valid and reliable test results.

In the past decade, radical advances have been made in characterizing the neuropathological and clinical progression of Alzheimer's disease (AD) and AD-related dementias (ADRD) [1]. This progression occurs through 3 distinct disease stages: preclinical or asymptomatic-at-risk, mild cognitive impairment (MCI), and dementia. The MCI stage has been comprehensively studied and variously defined [2], with most definitions converging on cognitive functioning below what is normal for an individual's age and education, with preservation of functional independence [3]. In 2018, the American Academy of Neurology (AAN) estimated the prevalence of MCI to range between 6 and 25% for those over the age of 65 [4], twice as high as the 9–12% estimated for dementia [5]. Frameworks for classifying the preclinical disease stage have also been proposed [6, 7]. A core concept in these frameworks is subjective cognitive decline (SCD), which refers to an individual's self-experienced persistent cognitive decline despite normal performance on standardized cognitive tests. Like MCI, SCD is associated with greater risk of progression to dementia [8]. The prevalence of SCD is variable and has been reported in up to 90% of older adults [9].

It is clear that a large segment of the older adult population experiences worries about their cognition or cognitive impairment. However, only up to half consult a health professional [10]. Health professionals, gerontology experts, and various stakeholder organizations agree that early detection of MCI and dementia due to AD or other neurodegenerative disorders is associated with better outcomes. A recent literature review highlighted the advantages an early diagnosis may have for patients and caregivers, including accessing support services, managing symptoms, and planning for the future. The review recognized the relative dearth of high-quality investigations of psychosocial outcomes associated with such early diagnoses [11, 12]. A Monte Carlo analysis taking into account potential benefits of available pharmacologic and nonpharmacologic interventions suggested that early detection and management of cognitive impairment states generate cost savings for federal and state healthcare systems [13, 14]. These benefits must be weighed against potential harms including misdiagnosis and the socioemotional impact of a dementia diagnosis [15, 16]. Ultimately, the benefits would clearly outweigh the harms only if treatments that can slow the progression of AD pathology emerged: these would likely be most beneficial when administered in the preclinical or early stages of the disease [12].

At present, the diagnosis of AD/dementia is made when there have been noticeable cognitive changes and concern from a family member or clinician, rather than from results of regular screening tests [17]. It has been estimated that cognitive impairment goes unrecognized in half of affected older adults [18]. A systematic review of factors contributing to missed or delayed diagnosis identified a range of problems for physicians, including limited treatment options, concern about stigmatizing patients, reluctance to discuss cognitive problems, difficulty implementing assessment tools, embarrassment when administering assessments, and difficulty explaining the test results [19]. The same systematic review also identified patient barriers to recognition of cognitive decline. Among these were problems of access, including residing in rural areas and concerns about the cost of assessment, but also worries about receiving a diagnosis, including perceived lack of treatment options and fear of negative emotional reactions. Other significant problems, reported in numerous studies, were reluctance to spontaneously mention cognitive problems [20] and refusal of screening [21, 22] or of diagnostic follow-up [23].

There has been a recent surge in the use of telehealth and telemedicine, including applications to cognitive assessments that take place outside of the primary care office, hospital, or clinic [24-26]. These assessments, referred to as remote or distance assessments, are typically administered via telephone, videoconferencing, and other electronic means [24]. Paper-and-pencil tests administered remotely can also be considered distance assessments; however, this is less common in research and clinical practice. It should be noted that we use the term distance here to denote any nontraditional assessment feature, and not necessarily a remote location. Distance assessments have potential advantages over traditional assessments and may be preferred by older adults. Growing numbers of older adults now use technology: it has been estimated that nearly half of adults over the age of 65 own smartphones and approximately two-thirds of older adults use the Internet [27]. Distance cognitive assessments may benefit the older adult population in multiple ways. First, they may provide additional assessment opportunities for those who may have gone unrecognized in the primary care setting due to refusal of screening or diagnostic follow-up [28]. Second, these assessments can be accessed by individuals residing in rural communities and lacking access to primary care or other health resources. Third, distance assessments may mitigate some of the embarrassment experienced during testing in traditional settings by both older adults and their physicians because the former can complete testing on their own in the comfort of their home. These methods may introduce person-centered practices into the assessment process [29]. Fourth, self-administered computerized distance assessments have the potential to contribute to major cost and time savings: the earlier disease detection they would allow could result in more opportunities for preventive intervention and better prognosis and ultimately in higher quality healthcare [14]. Lastly, cognitive assessments that take place at a distance using telehealth methods may be a safer option during public health crises such as the COVID-19 pandemic.

Distance cognitive assessments have been evaluated for research settings including clinical trials, where they have shown feasibility and validity compared to standard assessments, provided sufficient training is given to test-takers to become familiar with the interface [30-33]. Distance assessments are not currently recommended for implementation in clinical settings [17, 34] because their psychometric properties across various interfaces remain uncertain. With the increase in use of telehealth and telemedicine in psychology, the American Psychiatric Association (APA) and the American Telemedicine Association (ATA) collaborated to provide a document outlining best practices in clinical videoconferencing in mental health [35]. However, they do not provide any suggestions specific to assessment or cognitive screening. The document does provide specific recommendations for telemedicine use within the geriatric patient population [35]. Both APA and ATA advise that cognitive testing and interviewing techniques be adapted to serve the individual patient's needs in terms of any hearing or visual impairments [35]. If the patient agrees, they also recommend obtaining a report from a family member or caretaker [35].

Threats to the validity/reliability of distance assessments can pertain to the location of testing, the mode of administration, and the testing interface. When testing is performed with individuals in their own home, the environment is obviously less controlled than the laboratory or clinic, with the possibility of unknown distractions and interruptions impacting task performance [24, 36]. Test-takers may also have a family member answer questions for them, which would invalidate the results of the assessment. When assessments are self-administered, computer skill level may affect speed of responses, putting those with low computer skills at a disadvantage [37]. Test-takers may misunderstand and/or misinterpret instructions, and there may be no opportunity for clinicians to clarify instructions [37]. Interface characteristics also play a role: for example, the small screen size of smartphones may lead to higher error rates compared to tablets [38].

There is a great need for accessible and cost-effective assessments that can detect cognitive impairment among older adults. The numbers of incident dementia cases are staggering, and adding to these are an unknown number of older adults who are worried about their cognition but may only need reassurance and monitoring. Distance cognitive assessments may address some of this need but must first demonstrate psychometric properties comparable to those of current in-clinic assessments. The primary objective of this systematic literature review was to evaluate the published psychometric properties of distance cognitive assessments, assess their appropriateness for use, and provide ideas for further research in this area. We defined distance cognitive assessment as having any feature outside of conventional in-clinic paper-and-pencil assessment administered by an examiner. Distance features were any (combination) of the following: the test was administered outside of the clinic or research office, the test was self-administered by the test-taker, and the test used a phone or computer interface. Based on these criteria, we developed a method for determining the degree of distance for each assessment evaluated in the literature (see Method for Rating Distance). Our review was guided by 3 specific questions about the psychometric properties of distance assessments: (1) Are validity and reliability related to the age and cognitive status of test-takers? (2) Is there a relationship between validity/reliability and degree of distance? (3) Does validity/reliability depend on the type of interface?

Study Design

This study was a systematic review conducted following the Cochrane Handbook for Systematic Reviews of Interventions and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement.

Search Strategy

We searched PubMed, PsycINFO, AgeLine, and Academic Search Premier for studies published between January 2008 and December 2018. An updated search was conducted to include articles published between December 2018 and June 2020. Date restrictions were applied, and the search was restricted to publications available in English. The electronic search strategy terms were (cognitive test* OR cognitive screen* OR neuropsych* assess* OR cognitive assess*) AND (remote OR mobile OR home-based OR telehealth OR telemedicine OR online OR virtual OR electronic OR computer-based). EndNote was used to record titles, abstracts, and inclusion/exclusion decisions.
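For illustration only, the boolean structure of this search string can be sketched in a few lines of Python; the term lists are taken verbatim from the strategy above, and the build_query helper is a hypothetical name introduced here, not part of any database interface:

```python
# Sketch of how the boolean search string above is composed:
# terms within each list are joined with OR, and the two lists with AND.

test_terms = ["cognitive test*", "cognitive screen*",
              "neuropsych* assess*", "cognitive assess*"]
distance_terms = ["remote", "mobile", "home-based", "telehealth",
                  "telemedicine", "online", "virtual", "electronic",
                  "computer-based"]

def build_query(left, right):
    """OR the terms within each list, then AND the two lists together."""
    return "({}) AND ({})".format(" OR ".join(left), " OR ".join(right))

print(build_query(test_terms, distance_terms))
```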

Participants

Participants of included studies were required to be over the age of 50 years; they could be with or without cognitive concerns and with or without a diagnosis of ADRD or cerebrovascular disease/stroke, according to any recognized diagnostic criteria, for example, ICD-10 or DSM-5. Studies with participants diagnosed with neurological and neurodegenerative disorders not specific to geriatric populations (e.g., Parkinson's disease and multiple sclerosis) or mental health conditions (e.g., schizophrenia) were excluded.

Study Criteria

All studies purporting to assess cognitive function in older adults and/or detect MCI or ADRD by means of a structured assessment (screen, brief or full battery) were considered. Studies were included if the cognitive assessment had at least one distance feature (see Method for Rating Distance) and if reliability, validity, or both were reported. Studies using cognitive assessments for different applications (e.g., driving safety evaluation) or without normative data for older adults were excluded.

Selection Procedure, Data Extraction, and Data Management

Titles and abstracts of retrieved studies were screened for inclusion by 2 independent reviewers (D.B. and M.S.). Any disagreements were resolved by discussion with a third reviewer (C.J.). The data extracted for each study included demographic and cognitive status characteristics of participants; assessments used including domains tested; distance features including interface, administration, location of assessment, observation of test-taker; validity measures (convergent and criterion); and reliability measures (test-retest, interrater, and internal consistency). Summary measures included correlation and accuracy ranges.

Method for Rating Distance

The studies were evaluated in terms of degree of distance of the assessment by assigning one point to each of the following features: the test location was home/outside the clinic or research office setting, the test was self-administered by the test-taker, and the interface was a phone or computer. A fourth point was assigned when test-takers were on their own and not being observed, regardless of administration mode (i.e., examiner administration could occur with the participant not observed, and self-administration could occur with the participant observed). To illustrate, a cognitive test that was self-administered (1 point) on a tablet or computer (1 point) in the participants' home (1 point), with test-takers on their own (1 point), would be assigned a total distance rating of 4. A test administered by an examiner via videoconferencing (1 point), with participants at home (1 point) and on their own (1 point), would be assigned a total distance rating of 3. A test administered by an examiner over telephone (1 point), with participants at home (1 point) with a family member, would be assigned a total distance rating of 2. A test administered by an examiner via mobile device (1 point) in an office setting, with the participant observed by the examiner, would be assigned a distance rating of 1. Studies with ratings of 1 and 2 were considered low, studies with a rating of 3 were considered intermediate, and studies with a rating of 4 were considered high in terms of degree of distance. Each study was rated by at least 2 reviewers to ensure consistency.
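To make the scoring rule concrete, the logic can be expressed as a minimal Python sketch; this is an illustration of the rating scheme described above, not software used in the review, and the feature and function names are assumptions chosen for readability:

```python
# Minimal sketch of the distance rating described above:
# one point per distance feature, summed over four binary features.

def distance_rating(outside_clinic, self_administered,
                    phone_or_computer_interface, unobserved):
    """Sum one point for each distance feature that applies (0-4)."""
    return sum([outside_clinic, self_administered,
                phone_or_computer_interface, unobserved])

def distance_level(rating):
    """Map a total rating to the review's low/intermediate/high categories."""
    if rating <= 2:
        return "low"
    elif rating == 3:
        return "intermediate"
    return "high"

# The worked examples from the text:
# self-administered tablet test, at home, test-taker alone -> 4 (high)
assert distance_level(distance_rating(True, True, True, True)) == "high"
# examiner-administered via videoconference, at home, alone -> 3 (intermediate)
assert distance_level(distance_rating(True, False, True, True)) == "intermediate"
# examiner-administered by telephone, at home with a family member -> 2 (low)
assert distance_level(distance_rating(True, False, True, False)) == "low"
# examiner-administered on a mobile device, in office, observed -> 1 (low)
assert distance_level(distance_rating(False, False, True, False)) == "low"
```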

Study Selection

The initial search of studies published between January 2008 and December 2018 yielded 3,954 articles. After duplicates were removed, 3,245 titles and abstracts were screened and a further 3,146 articles were excluded. A total of 99 articles were selected for full-text screening, and 74 records were then excluded for reasons shown in Figure 1. A total of 25 articles initially met the inclusion criteria and were kept for the review. An updated search was conducted in June 2020 to include recent articles, yielding an additional 1,600 articles. After duplicates were removed, 1,121 titles and abstracts were screened and a further 1,094 articles were excluded. A total of 27 articles were selected for full-text screening, and 26 records were then excluded for reasons shown in Figure 2. Only 1 article met the inclusion criteria and was retained, resulting in an overall total of 26 articles included in this review.

Fig. 1. Initial search (January 2008–December 2018).

Fig. 2. Updated search (December 2018–June 2020).

Study Characteristics

Study and participant characteristics are reported in Table 1. The mean age of participants across studies ranged from 57 to 83 years, and the mean education ranged from 11 to 16 years. Not all studies reported education level for participants, and 1 study described participants with <12 years of education but did not provide an estimate of years. The number of participants across studies was highly variable, ranging from 8 to 8,627 total participants. The cognitive status of participants recruited for each of the studies was also variable, ranging from cognitively normal to MCI to AD/dementia. One study examined patients with recent transient ischemic attack (TIA) or stroke. The assessment tools included 11 screening tests (e.g., Montreal Cognitive Assessment), 10 brief batteries (e.g., Computer Assessment of MCI), and 5 full neuropsychological batteries (e.g., Uniform Data Set battery).

Table 1. Study characteristics and participant demographics [32, 38, 39, 41-46, 50, 52-54, 59-71]

Reliability and validity estimates are reported in Table 2. Validity measures included correlations with a comparator (standard screen, neuropsychological tests, and paper-and-pencil version) and diagnostic accuracy for MCI or AD/dementia. Convergent validity estimates were highly variable, ranging from r = 0.41 to 0.80 (absolute values). Criterion validity estimates against clinical diagnoses were also variable, ranging from 60 to 99% for sensitivity and 59 to 100% for specificity. Reliability measures were less frequent and included highly variable test-retest estimates (range r = 0.19–0.81), interrater coefficients (range 0.42–0.88), weighted kappa analyses (range 0.90–1.00), and internal consistency estimates (range α 0.73–0.93).
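For reference, the sensitivity and specificity estimates summarized here follow the standard definitions, with clinical diagnosis as the criterion:

$$\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP}$$

where TP, FN, TN, and FP denote the counts of true positives, false negatives, true negatives, and false positives. For example, a test with 99% sensitivity and 59% specificity misses 1% of impaired cases but falsely flags 41% of unimpaired test-takers.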

Table 2. Validity and reliability data

Age and Cognitive Status

Expected age differences supportive of validity were reported in several studies. Older age was associated with lower performance on web-based computerized batteries, and this did not appear solely due to low computer familiarity [39, 40]. However, older age was also associated with a somewhat higher likelihood of producing invalid data [40] and with asking staff questions before proceeding with a self-administered, tablet-based battery [41]. Since no study compared psychometric properties between age groups, we evaluated these across studies in relation to mean reported age <75 years (16 studies) and 75 years and above (10 studies). The range of convergent validity estimates was comparable for younger and older groups (r = 0.52–0.78 and 0.56–0.77, respectively). Sensitivity and specificity ranges were also comparable (74–100% and 59–97% in the younger and 68–99% and 61–95% in the older group). Too few comparable reliability estimates were available for the 2 age groups. It must be said that with 2 exceptions [42, 43], studies with older participants implemented low distance assessments, for example, computer-based testing in clinic but not at home [40].

Over half of the studies (14 or 54%) included participants with different levels of cognitive functioning (normal, MCI, and mild AD/dementia). Overall, the discrimination between normal and impaired cognitive functioning was quite accurate, with sensitivity and specificity ranging 60–100% and 69–100%, respectively. Participants with impaired cognition provided a higher proportion of data not meeting integrity criteria [40], had more difficulty with self-administration, and received more caregiver assistance [44]. No study directly compared the psychometric properties of assessments for normal and cognitively impaired participants. Convergent validity estimate ranges were similar for studies including and not including cognitively impaired participants (r = 0.46–0.80 vs. 0.41–0.75). Reliability estimates also did not seem to differ, with comparable ICCs, internal consistency, and test-retest estimates. Only 2 studies with cognitively impaired participants implemented intermediate and high distance assessments; they encountered no difficulties with videoconferencing [43] or an online computer-based interface [42]. However, the latter included questions for both patient and informant [42].

Degree of Distance

As shown in Table 2, 10 studies (38%) were assigned a rating of low distance, 11 (42%) a rating of intermediate distance, and 5 (19%) a rating of high distance. Less than half of the studies (46%) employed self-administered assessments, whereas the other 54% used examiner-administered tools. With respect to location, 14 studies (54%) investigated assessments administered at participants' homes, whereas the remainder (n = 12, 46%) examined assessments conducted at a clinic, hospital, or medical facility. The majority of studies (n = 18, 69%) used assessments that were administered when participants were alone, while in the remainder (n = 8, 31%) participants were observed during testing. Direct comparisons of performance at different levels of distance were presented in only 2 studies and did not reveal disadvantages at higher distance levels [40, 45]. No study compared psychometric properties at different distance levels. When comparing these across studies, convergent validity coefficients for low distance studies ranged from r = 0.52 to 0.80, for intermediate distance studies from 0.49 to 0.75, and for high distance studies from 0.41 to 0.53. Sensitivity and specificity estimates in low distance studies ranged 60–89% and 73–100%, in intermediate distance studies, 71–99% and 59–95%, and in a single high distance study, 68 and 67%, respectively. Reliability estimates, where available, appeared to decrease with increasing distance. Test-retest estimates ranged from r = 0.63 to 0.77 for low, 0.42 for intermediate, and 0.19–0.81 for high distance studies. Similarly, agreement between raters on items also seemed to decrease with increasing distance (weighted kappa ranged from 0.91 to 1.00 for low, 0.50–0.52 for intermediate, and 0.24–0.99 for high distance studies).

Assessment Interface

We found that 9 studies (35%) used a conventional telephone, 13 studies (50%) used computer technology (PC, iPad/tablet, and web-based), 3 studies (11%) made use of videoconferencing, and 1 study (4%) used smartphones (see Table 1). No study compared psychometric properties of different interfaces. Studies using conventional telephone reported good-to-excellent validity estimates (r = 0.62 for convergent validity and 73–97% and 59–87% for sensitivity and specificity). Reliability was difficult to assess because it was reported in only 3 studies (internal consistency α 0.85 and weighted kappa 0.24–0.99). Studies using computer technology also reported good-to-excellent convergent (r = 0.41–0.75) and criterion validity (sensitivity 60–99% and specificity 67–100%). Internal consistency estimates were >0.70 and weighted kappa estimates ranged from 0.91 to 1.00, whereas test-retest reliability was quite variable (0.19–0.81). Studies using videoconferencing reported moderate-to-high agreement with face-to-face assessments and diagnoses (ICCs 0.42–0.90, weighted kappa = 0.52) [43, 45, 46].

Our systematic review identified a robust body of studies that examined validity and/or reliability of cognitive assessments with older adults using distance methods. These involved minimal to substantial alterations from standard in-clinic assessments, including novel technology, supervised and unsupervised self-administration of cognitive tasks, and test-taking in the home environment. The majority of studies presented psychometric data describing validity. These included correlations with traditional comparator tests (convergent validity evidence) and sensitivity and specificity of the assessments for cognitive impairment states including MCI and AD/dementia (criterion validity evidence). Reliability data were reported far less consistently. They included test-retest stability, intraclass correlations, weighted kappa, and internal consistency. In aggregate, the psychometric soundness of distance assessments appears supported. Several caveats emerged as well. Older test-takers and those with cognitive impairment were more likely to produce invalid data when performing computerized tasks, and these data were excluded from psychometric estimates. High levels of distance may result in a loss of validity and reliability. The fidelity of assessments using challenging interfaces (e.g., video-teleconference procedures) did not always appear optimal compared to face-to-face assessments.

To interpret the psychometric estimates summarized in this review in context, we consulted published sensitivity and specificity data for standard cognitive assessments in current clinical use. Screening tests recommended for early detection of AD/dementia in primary care include the GPCOG, Mini-Cog, and Memory Impairment Screen (MIS), all found to have high validity (>80% for both sensitivity and specificity) [34]. Even higher sensitivity and specificity (≥85%) were achieved by neuropsychological batteries, for example, tests that are part of the National Alzheimer’s Coordinating Center Uniform Data Set (UDS) [47, 48]. Accuracy estimates for the detection of milder impairment, for example, MCI, are lower for UDS and other neuropsychological tests (≥70%) [47, 49]. Accordingly, sensitivity and/or specificity estimates were adequate (≥70%) for most studies included in this review regardless of distance level. There were only 3 studies reporting estimates that fell below this level. It appears that this was due more to the type of assessment than the level of distance. The computerized version of the Placing Test had better accuracy than the paper-and-pencil version [50]. The verbal memory test part of the online Dementia Risk Assessment was new and not validated [42]. It is very difficult to interpret reliability estimates because they were inconsistently reported and differed across studies. We note that several ICCs, alpha coefficients, and test-retest correlations were less than adequate, raising questions about the reliability of distance assessments.

Previous research has highlighted potential validity threats to cognitive assessments using computer technology. For example, participants with lower computer familiarity showed poorer performance on speeded tasks requiring complex attention, raising the possibility that interaction with an unfamiliar interface demands cognitive resources [37]. Similarly, 1 study in the current review reported lower scores on computerized compared to traditional processing speed tasks and suggested that such differences may be due to participants having to switch between looking at the keyboard and the current stimulus [39]. Spatial memory scores were also lower on computerized than traditional tests, possibly due to manipulating stimuli using the mouse (drag-and-drop) [39]. However, the impact of the computer interface may also be beneficial. Verbal memory performance was higher on computerized than traditional tests, possibly because participants had to type word stimuli rather than say them out loud. The use of different modalities (listening to words and typing them) may have strengthened encoding processes [39]. These findings limit the comparability of scores on computerized and traditional tests and highlight that a computerized test becomes a new and different task whose relationship to the original test must be empirically determined [51].

A multitude of factors may threaten the validity of home-based assessments, particularly entirely unsupervised ones (i.e., without even telephone contact). Only 4 studies had older adults self-administer computerized tests at home with no staff present [40, 52-54]. Where comparisons were possible, no performance differences compared to self-administration in clinic or traditional assessments were reported [40, 52]. No clear evidence of other problems, including failure to understand task instructions, emerged [53, 54]. Yet, uncontrolled conditions in the home, including noise, distractions, and interruptions, may detract from validity and reliability. In our own research, we found that interruptions can adversely affect cognitive task performance in a lab environment [55]. No study has systematically examined the impact and possible mitigation of unanticipated home-related conditions on test performance. For this reason, psychometric data from such assessments should be interpreted with caution.

It is interesting that studies empirically determining the feasibility of their assessments prior to evaluating psychometric soundness were fairly rare [39, 40, 52, 53]. While work went into developing usable interfaces with clear instructions, information on the difficulties that test-takers encounter was sparse. A higher likelihood of invalid or missing responses and requests for assistance was reported for older age groups and those with cognitive impairment [40, 41]. Tests with high proportions of missing data (>25%), for example, a tower test, were excluded from analyses [39]. Feasibility was not examined in individuals with limited motor abilities and may be particularly relevant when participants use small screens and/or response inputs such as the mouse or stylus. Because invalid or low-integrity performance data were more likely among older and cognitively impaired individuals, who were then excluded from the psychometric estimates, a reasonable conclusion is that high distance assessments may be more suitable for younger, cognitively intact participants who have some familiarity with the technology being used.

Several studies assessed the experience older test-takers had with various types of distance assessments. Many older adults appeared to prefer computerized assessments to traditional ones [39]. When comparing their experiences of PC and iPad, participants preferred the latter and thought that they did better on the test [40]. Those completing assessments on their own at home valued being free to choose the time of testing, usually mid-morning when they felt freshest [40]. This is consistent with a report from the stress literature showing that older adults experienced more stress and performed more poorly on memory tests at later than earlier times of the day [56]. It also accords with our own observations that many older adults value choosing the circumstances of the assessment. Over a third reported a preference for completing the assessment at home [57]. Distance options could thus play a role in implementing person-centered practices in cognitive assessment [57].

Limitations of Current Research

Our review revealed the methodological limitations of the current research on cognitive assessment at a distance. The first was a striking lack of uniformity in the assessments used across studies. For example, among studies administering computerized assessments, only 2 used the same battery, the CAMCI [40, 41]. All others used different measures with highly variable domain coverage. Comparators for gauging convergent validity were also heterogeneous. Telephone assessments were somewhat more similar across studies, with the TICS, MoCA, and MCAS each used in at least 2 studies. Few studies undertook comparisons of the same assessment at different distance levels, for example, videoconferencing versus face-to-face [43, 45], computerized versus paper [50], PC versus tablet, and clinic versus home [40]. This lack of measurement uniformity precluded the use of meta-analytic methods in this systematic review to present pooled psychometric estimates and reach definitive conclusions regarding the psychometric soundness of assessments at various distances.

A second limitation of the existing literature was the lack of thorough investigation of the feasibility and usability of the specific distance assessment prior to finalizing the procedures and collecting psychometric data. Findings to date indicate that the age and cognitive status of the test-taker need to be carefully considered to maintain the psychometric integrity of the assessment, but there has been no systematic investigation of the effects of these test-taker characteristics on validity and reliability. A final limitation was the lack of data on older adults' experiences with cognitive assessment at a distance. The information to date is sparse and does not help with evaluating the potential that assessment at a distance may hold for reducing the stress and embarrassment that older adults often report during a cognitive assessment [23, 58]. Physician attitudes, identified in the literature as an important barrier to the detection of cognitive impairment [19], have not been examined in any of the studies included here.

Based on these limitations, we recommend using common cognitive measures in future research and comparing the psychometric characteristics of these assessments at well-specified levels of distance. We also stress the importance of collecting comprehensive data on the type of feasibility barriers that test-takers might face at higher distance levels. Finally, to balance costs and benefits of distance cognitive assessments, more attention should be devoted to the preferences and experiences of older test-takers and their healthcare providers.

This systematic review also has limitations. We did not conduct an evaluation of the methodological quality of individual studies because of the multitude of approaches and designs. We summarized the evidence in terms of ranges, which may be misleading with respect to the average and variability of the various coefficients reported here. Finally, we acknowledge the potential for publication bias, whereby studies finding poor validity and/or reliability of distance assessments may be under- or nonreported; we were not able to conduct a fail-safe N analysis to account for unpublished findings.

Early detection of cognitive impairment is deemed crucial; however, there are many patient barriers making this process difficult. Remote or distance cognitive assessments, administered via phone, videoconferencing, or computer platforms, have emerged as possible alternatives to traditional cognitive testing within primary care and clinical settings. The psychometric data presented in this review in principle support the validity and reliability of such approaches but also caution that higher distance may come at the cost of weakened psychometric soundness. Specifically, it is possible that self-administration of cognitive assessments at home, without an examiner present at least remotely, may not consistently produce valid and reliable results. Future research on distance cognitive assessments should delineate the boundaries for valid and reliable test results.

The authors state that all research was conducted ethically and in accordance with the World Medical Association Declaration of Helsinki. All data were collected from past published research articles and did not directly involve any human subjects.

The authors have no conflicts of interest to declare.

This study was not funded.

D.B. conducted background research, created search terms, reviewed and systematically selected articles to include, analyzed data, and wrote the manuscript. M.S. reviewed and systematically selected articles to include, analyzed data, and contributed to writing the manuscript. C.J. acted as the corresponding author and conducted background research, analyzed data, and contributed to writing the manuscript.

References

1. Jack CR Jr, Knopman DS, Jagust WJ, Shaw LM, Aisen PS, Weiner MW, et al. Hypothetical model of dynamic biomarkers of the Alzheimer's pathological cascade. Lancet Neurol. 2010;9(1):119–28.
2. Jacova C, Peters KR, Beattie BL, Wong E, Riddehough A, Foti D, et al. Cognitive impairment no dementia: neuropsychological and neuroimaging characterization of an amnestic subgroup. Dement Geriatr Cogn Disord. 2008;25(3):238–47.
3. Albert MS, DeKosky ST, Dickson D, Dubois B, Feldman HH, Fox NC, et al. The diagnosis of mild cognitive impairment due to Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimers Dement. 2011;7(3):270–9.
4. Petersen RC, Lopez O, Armstrong MJ, Getchius TSD, Ganguli M, Gloss D, et al. Practice guideline update summary: mild cognitive impairment: report of the Guideline Development, Dissemination, and Implementation Subcommittee of the American Academy of Neurology. Neurology. 2018;90(3):126–35.
5. Langa KM, Larson EB, Crimmins EM, Faul JD, Levine DA, Kabeto MU, et al. A comparison of the prevalence of dementia in the United States in 2000 and 2012. JAMA Intern Med. 2017;177(1):51–8.
6. Jessen F, Amariglio RE, Van Boxtel M, Breteler M, Ceccaldi M, Chételat G, et al. A conceptual framework for research on subjective cognitive decline in preclinical Alzheimer's disease. Alzheimers Dement. 2014;10(6):844–52.
7. Sperling RA, Aisen PS, Beckett LA, Bennett DA, Craft S, Fagan AM, et al. Toward defining the preclinical stages of Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimers Dement. 2011;7(3):280–92.
8. Mitchell A, Beaumont H, Ferguson D, Yadegarfar M, Stubbs B. Risk of dementia and mild cognitive impairment in older people with subjective memory complaints: meta-analysis. Acta Psychiatr Scand. 2014;130(6):439–51.
9. Sachdev PS, Brodaty H, Reppermund S, Kochan NA, Trollor JN, Draper B, et al. The Sydney Memory and Ageing Study (MAS): methodology and baseline medical and neuropsychiatric characteristics of an elderly epidemiological non-demented cohort of Australians aged 70–90 years. Int Psychogeriatr. 2010;22(8):1248–64.
10. Olivari BS, Baumgart M, Lock SL, Whiting CG, Taylor CA, Iskander J, et al. CDC grand rounds: promoting well-being and independence in older adults. MMWR Morb Mortal Wkly Rep. 2018;67(37):1036.
11. Dubois B, Padovani A, Scheltens P, Rossi A, Dell'Agnello G. Timely diagnosis for Alzheimer's disease: a literature review on benefits and challenges. J Alzheimers Dis. 2016;49(3):617–31.
12. McKhann GM, Albert MS, Sperling RA. Changing diagnostic concepts of Alzheimer's disease. In: Hampel H, Carrillo MC, editors. Alzheimer's disease: modernizing concept, biological diagnosis and therapy. Basel, Switzerland: Karger Publishers; 2012. Vol. 28; p. 115–21.
13. Sloane PD, Zimmerman S, Suchindran C, Reed P, Wang L, Boustani M, et al. The public health impact of Alzheimer's disease, 2000–2050: potential implication of treatment advances. Annu Rev Public Health. 2002;23:213–31.
14. Weimer DL, Sager MA. Early identification and treatment of Alzheimer's disease: social and fiscal outcomes. Alzheimers Dement. 2009;5(3):215–26.
15. Bunn F, Goodman C, Sworn K, Rait G, Brayne C, Robinson L, et al. Psychosocial factors that shape patient and carer experiences of dementia diagnosis and treatment: a systematic review of qualitative studies. PLoS Med. 2012;9(10):e1001331.
16. Iliffe S, Manthorpe J. The recognition of and response to dementia in the community: lessons for professional development. Learn Health Soc Care. 2004;3(1):5–16.
17. Moyer VA. Screening for cognitive impairment in older adults: US Preventive Services Task Force recommendation statement. Ann Intern Med. 2014;160(11):791–7.
18. Borson S, Frank L, Bayley PJ, Boustani M, Dean M, Lin PJ, et al. Improving dementia care: the role of screening and detection of cognitive impairment. Alzheimers Dement. 2013;9(2):151–9.
19. Bradford A, Kunik ME, Schulz P, Williams SP, Singh H. Missed and delayed diagnosis of dementia in primary care: prevalence and contributing factors. Alzheimer Dis Assoc Disord. 2009;23(4):306–14.
20. Hanzevacki M, Ozegovic G, Simovic I, Bajic Z. Proactive approach in detecting elderly subjects with cognitive decline in general practitioners' practices. Dement Geriatr Cogn Dis Extra. 2011;1(1):93–102.
21. Boustani M, Perkins AJ, Fox C, Unverzagt F, Austrom MG, Fultz B, et al. Who refuses the diagnostic assessment for dementia in primary care? Int J Geriatr Psychiatry. 2006;21(6):556–63.
22. Fowler NR, Frame A, Perkins AJ, Gao S, Watson DP, Monahan P, et al. Traits of patients who screen positive for dementia and refuse diagnostic assessment. Alzheimers Dement. 2015;1(2):236–41.
23. Fowler NR, Perkins AJ, Turchan HA, Frame A, Monahan P, Gao S, et al. Older primary care patients' attitudes and willingness to screen for dementia. J Aging Res. 2015;2015:423265.
24. Allard M, Husky M, Catheline G, Pelletier A, Dilharreguy B, Amieva H, et al. Mobile technologies in the early detection of cognitive decline. PLoS One. 2014;9(12):e112197.
25. Martin-Khan M, Flicker L, Wootton R, Loh PK, Edwards H, Varghese P, et al. The diagnostic accuracy of telegeriatrics for the diagnosis of dementia via video conferencing. J Am Med Dir Assoc. 2012;13(5):487.e19–24.
26. Cullum C, Hynan L, Grosch M, Parikh M, Weiner M. Teleneuropsychology: evidence for video teleconference-based neuropsychological assessment. J Int Neuropsychol Soc. 2014;20(10):1028–33.
27. Anderson M, Perrin A. Technology use among seniors. Washington, DC: Pew Research Center for Internet & Technology; 2017.
28. Harrell KM, Wilkins SS, Connor MK, Chodosh J. Telemedicine and the evaluation of cognitive impairment: the additive value of neuropsychological assessment. J Am Med Dir Assoc. 2014;15(8):600–6.
29. Koo BM, Vizer LM. Mobile technology for cognitive assessment of older adults: a scoping review. Innov Aging. 2019;3(1):igy038.
30. Hassenstab J, Aschenbrenner AJ, Balota DA, McDade E, Lim Y, Fagan AM, et al. O1-04-03: comparing smartphone-administered cognitive assessments with conventional tests and biomarkers in sporadic and dominantly inherited Alzheimer disease. Alzheimers Dement. 2018;14(7S_Part_4):P224–5.
31. Lathan C, Wallace AS, Shewbridge R, Ng N, Morrison G, Resnick HE. Cognitive health assessment and establishment of a virtual cohort of dementia caregivers. Dement Geriatr Cogn Dis Extra. 2016;6(1):98–107.
32. Rentz DM, Dekhtyar M, Sherman J, Burnham S, Blacker D, Aghjayan SL, et al. The feasibility of at-home iPad cognitive testing for use in clinical trials. J Prev Alzheimers Dis. 2016;3(1):8.
33. Sano M, Egelko S, Ferris S, Kaye J, Hayes TL, Mundt JC, et al. Pilot study to show the feasibility of a multicenter trial of home-based assessment of people over 75 years old. Alzheimer Dis Assoc Disord. 2010;24(3):256–63.
34. Cordell CB, Borson S, Boustani M, Chodosh J, Reuben D, Verghese J, et al. Alzheimer's Association recommendations for operationalizing the detection of cognitive impairment during the Medicare Annual Wellness Visit in a primary care setting. Alzheimers Dement. 2013;9(2):141–50.
35. Shore JH, Yellowlees P, Caudill R, Johnston B, Turvey C, Mishkind M, et al. Best practices in videoconferencing-based telemental health April 2018. Telemed J E Health. 2018;24(11):827–32.
36. Barth J, Nickel F, Kolominsky-Rabas PL. Diagnosis of cognitive decline and dementia in rural areas: a scoping review. Int J Geriatr Psychiatry. 2018;33(3):459–74.
37. Jacova C, McGrenere J, Lee HS, Wang WW, Huray SL, Corenblith EF, et al. C-TOC (Cognitive Testing on Computer): investigating the usability and validity of a novel self-administered cognitive assessment tool in aging and early dementia. Alzheimer Dis Assoc Disord. 2015;29(3):213–21.
38. Mielke MM, Machulda MM, Hagen CE, Edwards KK, Roberts RO, Pankratz V, et al. Performance of the CogState computerized battery in the Mayo Clinic Study on Aging. Alzheimers Dement. 2015;11(11):1367–76.
39. Hansen TI, Haferstrom EC, Brunner JF, Lehn H, Håberg AK. Initial validation of a web-based self-administered neuropsychological test battery for older adults and seniors. J Clin Exp Neuropsychol. 2015;37(6):581–94.
40. Mielke MM, Machulda MM, Hagen CE, Edwards KK, Roberts RO, Pankratz VS, et al. Performance of the CogState computerized battery in the Mayo Clinic Study on Aging. Alzheimers Dement. 2015;11(11):1367–76.
41. Tierney MC, Naglie G, Upshur R, Moineddin R, Charles J, Jaakkimainen RL. Feasibility and validity of the self-administered computerized assessment of mild cognitive impairment with older primary care patients. Alzheimer Dis Assoc Disord. 2014;28(4):311–9.
42. Brandt J, Sullivan C, Burrell LE II, Rogerson M, Anderson A. Internet-based screening for dementia risk. PLoS One. 2013;8(2):e57476.
43. Grosch MC, Weiner MF, Hynan LS, Shore J, Cullum CM. Video teleconference-based neurocognitive screening in geropsychiatry. Psychiatry Res. 2015;225(3):734–5.
44. Dougherty JH Jr, Cannon RL, Nicholas CR, Hall L, Hare F, Carr E, et al. The Computerized Self Test (CST): an interactive, internet accessible cognitive screening test for dementia. J Alzheimers Dis. 2010;20(1):185–95.
45. Galusha-Glasscock JM, Horton DK, Weiner MF, Cullum CM. Video teleconference administration of the Repeatable Battery for the Assessment of Neuropsychological Status. Arch Clin Neuropsychol. 2016;31(1):8–11.
46. Martin-Khan M, Flicker L, Wootton R, Loh PK, Edwards H, Varghese P, et al. The diagnostic accuracy of telegeriatrics for the diagnosis of dementia via video conferencing. J Am Med Dir Assoc. 2012;13(5):487.e19–24.
47. Dubois B, Feldman HH, Jacova C, DeKosky ST, Barberger-Gateau P, Cummings J, et al. Research criteria for the diagnosis of Alzheimer's disease: revising the NINCDS-ADRDA criteria. Lancet Neurol. 2007;6(8):734–46.
48. Weintraub S, Salmon D, Mercaldo N, Ferris S, Graff-Radford NR, Chui H, et al. The Alzheimer's Disease Centers' Uniform Data Set (UDS): the neuropsychological test battery. Alzheimer Dis Assoc Disord. 2009;23(2):91.
49. de Jager CA, Budge MM, Clarke R. Utility of TICS-M for the assessment of cognitive function in older adults. Int J Geriatr Psychiatry. 2003;18(4):318–24.
50. Vacante M, Wilcock GK, de Jager CA. Computerized adaptation of the Placing Test for early detection of both mild cognitive impairment and Alzheimer's disease. J Clin Exp Neuropsychol. 2013;35(8):846–56.
51. Bauer RM, Iverson GL, Cernich AN, Binder LM, Ruff RM, Naugle RI. Computerized neuropsychological assessment devices: joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology. Clin Neuropsychol. 2012;26(2):177–96.
52. Brown LJE, Adlam T, Hwang F, Khadra H, Maclean LM, Rudd B, et al. Computer-based tools for assessing micro-longitudinal patterns of cognitive function in older adults. Age. 2016;38(4):335–50.
53. Trustram Eve C, de Jager CA. Piloting and validation of a novel self-administered online cognitive screening tool in normal older persons: the Cognitive Function Test. Int J Geriatr Psychiatry. 2014;29(2):198–206.
54. Wesnes KA, Brooker H, Ballard C, McCambridge L, Stenton R, Corbett A. Utility, reliability, sensitivity and validity of an online test system designed to monitor changes in cognitive function in clinical trials. Int J Geriatr Psychiatry. 2017;32(12):e83–92.
55. Brehmer M, McGrenere J, Tang C, Jacova C. Investigating interruptions in the context of computerised cognitive testing for older adults. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 2012.
56. Sindi S, Fiocco AJ, Juster RP, Pruessner J, Lupien SJ. When we test, do we stress? Impact of the testing environment on cortisol secretion and memory performance in older adults. Psychoneuroendocrinology. 2013;38(8):1388–96.
57. Wong S, Jacova C. Older adults' attitudes towards cognitive testing: moving towards person-centeredness. Dement Geriatr Cogn Dis Extra. 2018;8(3):348–59.
58. Krohne K, Slettebø A, Bergland A. Cognitive screening tests as experienced by older hospitalised patients: a qualitative study. Scand J Caring Sci. 2011;25(4):679–87.
59. Ahmed S, De Jager C, Wilcock G. A comparison of screening tools for the assessment of mild cognitive impairment: preliminary findings. Neurocase. 2012;18(4):336–51.
60. Brouillette RM, Foil H, Fontenot S, Correro A, Allen R, Martin CK, et al. Feasibility, reliability, and validity of a smartphone based application for the assessment of cognitive function in the elderly. PLoS One. 2013;8(6).
61. Cook SE, Marsiske M, McCoy KJM. The use of the Modified Telephone Interview for Cognitive Status (TICS-M) in the detection of amnestic mild cognitive impairment. J Geriatr Psychiatry Neurol. 2009;22(2):103–9.
62. Darcy S, Rapcan V, Gail A, Burke N, O'Connell GC, Robertson IH, et al. A study into the automation of cognitive assessment tasks for delivery via the telephone: lessons for developing remote monitoring applications for the elderly. Technol Health Care. 2013;21(4):387–96.
63. Duff K, Tometich D, Dennett K. The modified Telephone Interview for Cognitive Status is more predictive of memory abilities than the Mini-Mental State Examination. J Geriatr Psychiatry Neurol. 2015;28(3):193–7.
64. Kennedy RE, Williams CP, Sawyer P, Allman RA, Crowe M. Comparison of in-person and telephone administration of the Mini-Mental State Examination in the University of Alabama at Birmingham Study of Aging. J Am Geriatr Soc. 2014;62(10):1928–32.
65. Knopman DS, Roberts RO, Geda YE, Pankratz VS, Christianson TJ, Petersen RC, et al. Validation of the Telephone Interview for Cognitive Status-modified in subjects with normal cognition, mild cognitive impairment, or dementia. Neuroepidemiology. 2010;34(1):34–42.
66. Pendlebury ST, Welch SJ, Cuthbertson FC, Mariz J, Mehta Z, Rothwell PM, et al. Telephone assessment of cognition after transient ischemic attack and stroke: modified Telephone Interview of Cognitive Status and telephone Montreal Cognitive Assessment versus face-to-face Montreal Cognitive Assessment and neuropsychological battery. Stroke. 2013;44(1):227–9.
67. Pillemer S, Papandonatos GD, Crook C, Ott BR, Tremont G. The modified telephone-administered Minnesota Cognitive Acuity Screen for mild cognitive impairment. J Geriatr Psychiatry Neurol. 2018;31(3):123–8.
68. Reckess GZ, Brandt J, Luis CA, Zandi P, Martin B, Breitner JC. Screening by telephone in the Alzheimer's disease anti-inflammatory prevention trial. J Alzheimers Dis. 2013;36(3):433–43.
69. Saxton J, Morrow L, Eschman A, Archer G, Luther J, Zuccolotto A. Computer assessment of mild cognitive impairment. Postgrad Med. 2009;121(2):177–85.
70. Tremont G, Papandonatos GD, Springate B, Huminski B, McQuiggan MD, Grace J, et al. Use of the telephone-administered Minnesota Cognitive Acuity Screen to detect mild cognitive impairment. Am J Alzheimers Dis Other Demen. 2011;26(7):555–62.
71. Solomon TM, Barbone JM, Feaster HT, Miller DS, DeBros GB, Murphy CA, et al. Comparing the standard and electronic versions of the Alzheimer's Disease Assessment Scale - Cognitive Subscale: a validation study. J Prev Alzheimers Dis. 2019;6(4):237–41.