Abstract
Introduction: Timely detection of cognitive impairment, such as mild cognitive impairment (MCI) or dementia, is pivotal for initiating early interventions that improve patients’ quality of life. Conventional paper-pencil tests, though common, have limited sensitivity in detecting subtle cognitive changes. Computerized assessments offer promising alternatives, overcoming time and manual-scoring constraints while potentially providing greater sensitivity. Methods: A literature search yielded 26 eligible articles (2020–2023). The articles were reviewed according to PRISMA guidelines, and the computerized tools were categorized by diagnostic outcome (MCI, dementia, or both). Results: The subjects included in the studies were aged 55–77 years. The overall gender distribution comprised 60% females and 40% males. Sample sizes varied considerably, from 22 to 4,486. Convergent validity assessments in 20 studies demonstrated strong positive correlations with traditional tests. Overall classification accuracy in detecting MCI or dementia and distinguishing them from normal cognition (NC) reached up to 91%. Notably, 46% of the studies received high-quality ratings, underscoring the reliability and validity of the findings. Conclusion: This review highlights the advancements in computerized cognitive assessments for assessing MCI and dementia. This shift toward technology-based assessments could enhance detection capabilities and facilitate timely interventions for better patient outcomes.
Introduction
The prevalence of dementia is rising as the global population ages, yet it remains significantly underdiagnosed [1, 2]. A Johns Hopkins University study found that 39.5% of individuals over 65 years meeting the dementia criteria were undiagnosed [3]. Another study estimated that globally, 75% of individuals with dementia are undiagnosed [4]. Detecting dementia early is important, as the number of dementia cases is expected to rise significantly; according to the World Health Organization (WHO), projections are 78 million in the next decade and potentially soaring to 139 million by 2050 [5].
The underdiagnosis of dementia is mainly due to the complexities of clinical diagnosis, which involves time-consuming tests and documentation. Computerized cognitive assessments can help alleviate this issue by providing automated administration and real-time scoring, thereby reducing costs, minimizing errors, decreasing intra- or interexaminer variability, and offering interpretative reports and care recommendations. This could potentially lead to faster and easier clinical decision-making and improved patient outcomes [6, 7].
Current paper- and pencil-based screenings like the Mini-Mental State Examination (MMSE), the Clock Drawing Test, and the Mini-Cog are accurate in detecting dementia but are less sensitive to early-stage cognitive changes and time-consuming for staff. They are also typically administered only once at a given clinical examination, with long intervals between follow-up assessments [8]. New technology can mitigate these limitations [9]. Mobile devices enable repeated testing with alternate versions, reducing practice effects and enhancing accuracy and validity through longitudinal data [10]. This also allows for the creation of individualized cognitive trajectory profiles.
Detecting cognitive impairment is crucial for effective interventions that delay cognitive decline and improve quality of life. The widespread use of smartphones and tablets in modern society makes self-administered assessments feasible [11, 12], offering the potential for the earliest and easiest detection of progression from healthy aging to cognitive impairment. Mild cognitive impairment (MCI) marks a transitional stage between normal aging and dementia and, in the case of amnestic MCI (aMCI), presents with episodic memory impairment and an increased risk of progression to Alzheimer’s disease (AD) [13].
Web-based tools offer language options and self-administration, improving accessibility [10], especially in remote areas [14]. For instance, the neotiv platform offers a “remote digital memory composite score” comprising three nonverbal memory subtests to distinguish between healthy individuals and those with MCI [15]. Computerized tools can measure neurobehavioral patterns and incorporate artificial intelligence (AI), complementing traditional tests [16]. Most importantly, ecologically valid repeated testing enables patients to take assessments both when fatigued and when well rested, offering a more accurate picture of how tiredness affects cognitive performance.
Despite these benefits, digital cognitive assessments present challenges, including hardware and software variability, Internet quality, data protection concerns, and examinee-related factors like technology familiarity and anxiety [17]. A study found differences between the traditional Montreal Cognitive Assessment (MoCA) and its digital version (eMoCA), particularly in the visuospatial/executive domain, for participants with less touchscreen experience [18]. Privacy concerns also arise from storing and sharing cognitive data, especially when collecting identifiable data, like handwriting or voice recordings [19]. Additionally, while many older adults have adopted digital technology, some still lack access or expertise, potentially excluding them from research.
A recent review highlighted 10 self-administered computerized cognitive measures proposed for clinical settings only, prompting an evaluation of their potential application in nonclinical settings [20]. This review aims to assess how technological advancements can meet evolving societal needs. Our systematic review focuses on self-administered and examiner-supported computerized cognitive assessments for older adults, investigating their validity and reliability in distinguishing between normal cognition (NC), MCI, and dementia in this demographic.
Methods
Search Strategy
From March 1 to May 31, 2023, a systematic literature search was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [21]. PubMed, Scopus, and Web of Science were searched using terms such as “cognitive assessment,” “computerized,” “dementia,” “older adults,” and “MCI.” The detailed search string is provided in Supplementary Material 1 (SM1) (for all online suppl. material, see https://doi.org/10.1159/000541627). Only peer-reviewed articles in English published between 2020 and 2023 were considered, following the specific inclusion and exclusion criteria outlined in Table 1.
Table 1. Inclusion and exclusion criteria for study selection.
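For illustration, a simplified Boolean query combining the reported search terms might take the form below; this is a hypothetical sketch, not the exact string used, which is provided in SM1.

```
("cognitive assessment" OR "cognitive screening" OR "cognitive test*")
AND (computerized OR computerised OR digital)
AND (dementia OR "mild cognitive impairment" OR MCI)
AND ("older adult*" OR elderly OR aged)
```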
After duplicate removal, 2,149 reports underwent screening for exclusion criteria based on titles and abstracts. Sixty-five articles were selected for full-text review to assess eligibility (Fig. 1). Of these records, 23 met inclusion criteria, with three additional records found through citation searching, totaling 26 articles in the review. The primary reasons for exclusion were studies including participants with comorbidities, absence of MCI or dementia diagnosis, and participants younger than 50 years.
Data Extraction
The research findings were managed using Mendeley Reference Manager (Desktop v1.19.8) and examined by two authors. Articles underwent title and abstract screening. During the full-text review, data extracted for each study included: (1) tool name and characteristics, (2) composition of the validation sample, (3) validation data and effect size, (4) standard neuropsychological tests used for comparison, and (5) study setting. Additional data, including scoring automation and system availability, were collected to assess tool quality.
Quality Assessment
A modified model based on Tsoy et al. (2021) was used for the quality assessment. Parameters included investigated domains, sample size, reliability, validity, automation level, administration types, marketplace availability, scoring/reporting facilities, language options, data security, and feasibility studies (online suppl. Tables 1, 2). Studies rated three or higher in at least four areas were considered to be of high quality.
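This decision rule can be summarized in a minimal sketch; the parameter names, the rating scale, and the example values below are assumptions for illustration only, with the full rubric given in online supplementary Tables 1 and 2.

```python
# Hypothetical sketch of the quality-assessment decision rule described
# above: a study counts as "high quality" if at least four of the rated
# parameters score three or higher. Names and values are invented.
from typing import Dict

def is_high_quality(ratings: Dict[str, int],
                    cutoff: int = 3, min_areas: int = 4) -> bool:
    """True if at least `min_areas` parameters are rated `cutoff` or higher."""
    return sum(score >= cutoff for score in ratings.values()) >= min_areas

example = {"sample_size": 3, "reliability": 4, "validity": 3,
           "automation": 2, "language_options": 3, "data_security": 1}
print(is_high_quality(example))  # True: four parameters rated >= 3
```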
Results
The 26 studies reviewed were categorized into three groups: MCI and subtypes, (preclinical) dementia, and both. Group 1 included 11 studies using nine cognitive tools, group 2 had six studies with various tools, and group 3 comprised ten distinct computerized cognitive assessments (see Table 2).
| Intended diagnostic group | Computerized cognitive tool | Studies |
|---|---|---|
| (1) MCI and subtypes (SMC, aMCI, eoMCI) | CBB | Alden et al. [22] (2021), Kairys et al. [23] (2022) |
| | NIHTB-CB | Kairys et al. [23] (2022) |
| | CANTAB | Campos-Magdaleno et al. [24] (2020) |
| | BHA | Paterson et al. [25] (2021) |
| | FACEmemory® | Alegret et al. [26] (2022) |
| | BOCA | Vyshedskiy et al. [27] (2022) |
| | SMART | Dorociak et al. [28] (2021) |
| | VST | Iliadou et al. [29] (2021), Zygouris et al. [30] (2020) |
| | CAKe | Fritch et al. [31] (2023) |
| (2) (Preclinical) dementia: AD, VaD | C3 | Papp et al. [32] (2021) |
| | dCDT | Davoudi et al. [33] (2021) |
| | IDEA cognitive screen | Paddick et al. [34] (2021) |
| | SLS | Stricker et al. [35] (2022) |
| | VST, SHQ | Coughlan et al. [36] (2020) |
| | BRANCH | Papp et al. [37] (2021) |
| (3) Both (1) and (2) | dCDT | Binaco et al. [38] (2020) |
| | C-ABC | Noguchi-Shinohara et al. [39] (2020) |
| | BHA | Rodríguez-Salgado et al. [40] (2021) |
| | SAGE | Scharre et al. [41] (2021) |
| | BrainCheck | Ye et al. [42] (2022) |
| | EC-Screen | Chan et al. [43] (2020) |
| | NNCT | Oliva and Losa [44] (2022) |
| | COST-A | Visser et al. [45] (2021) |
| | RSVP | Perez-Valero et al. [46] (2023) |
| | ICA | Kalafatis et al. [47] (2021) |
eoMCI, early-onset MCI; SMC, subjective memory complaints; VaD, vascular dementia.
Characteristics of Included Studies
The characteristics of the 26 studies are outlined in online supplementary Table 3 (S3). Sample sizes ranged from 22 to 4,486 participants, and samples were predominantly female (60% overall). Population-based samples tended to be larger than those of clinical trials. Participant ages ranged from 55 to 80 years. Participants had at least 12 years of education, except in one study with lower education levels [24]. Most were white, with some studies focusing exclusively on Asian [43] and Tanzanian/African cohorts [34]. Studies compared computerized assessments with the MMSE, MoCA, imaging biomarkers, and cerebrospinal fluid biomarkers. Diagnostic subgroups included aMCI, mixed MCI, and early-onset MCI. Seven studies included individuals with subjective cognitive complaints or subjective memory complaints; four of these studies (57%) treated such individuals as cognitively impaired, while three (43%) used them as a control group. The dementia group included mild forms of AD, preclinical AD (Aβ+), ε3ε4 carriers, and non-AD dementia.
Characteristics of Computerized Tools
Tools were additionally classified into three types, as indicated in Table 3: in-clinic/tablet-based, remote assessments, and innovative data analysis. Remote assessments were the most strongly represented in this review, while groups 1 and 3 were roughly equally distributed.
| Predominantly in-clinic and tablet-based cognitive assessment | Remotely administered assessment for at-home use and communities using mobile devices and PCs | Novel tools/use of AI |
|---|---|---|
| CogState BB and NIH Toolbox BB | BHA | IDEA |
| Computerized Cognitive Composite (C3) | SAGE | VST |
| CANTAB | BrainCheck | Miro Health Mobile Assessment Platform |
| dCDT | FACEmemory | CAKe |
| C-ABC | BRANCH | RSVP |
| | BOCA | ICA |
| | SLS | |
| | EC-Screen | |
| | SMART | |
| | NNCT | |
| | COST-A | |
Assessments were conducted on one type of tablet in 38% of the studies, two of these specifically on iPads. Four studies (15%) provided a choice between a PC and a tablet, while six studies (23%) were performed on PCs or laptops only. Additionally, two studies (8%) were conducted primarily on smartphones, and two (8%) used digital pens. Half of the assessments were self-administered, and the rest were examiner supported. Six (23%) provided automated scoring and reporting, while twelve (46%) offered automated scoring only. Five studies (19%) did not provide details about automation options.
Settings varied across the three types of tools. The in-clinic/tablet-based tools were used in population samples [22], community dwellings [23], memory clinics [27], and research settings [28]. Remote assessments were used in preclinical [32], community [33], population, and research settings [36]. Innovative data analysis tools focused primarily on clinical settings [40, 41, 46] but were also used in research [42, 44] and in community and population samples [43, 45]. Of these, only eight tools were validated with ≥50 participants per diagnostic group. Administration times ranged from 5 to 30 min. Twelve tools were commercially available, while three required no dedicated purchase. Feasibility outcomes were reported in most studies, with 22 showing preliminary outcomes and four providing comprehensive feasibility data.
Language availability varied (reported in 17 out of 26 studies), with some tools available in multiple languages: four tools were available in two languages, and eight in three or more. Data security was reported for all tools except one (NNCT). Based on the overall quality assessment, 46% of the studies received a high-quality rating.
Comparison of Cognitive Outcomes
We assessed the efficacy of computerized cognitive assessments versus standard tests in identifying and distinguishing between cognitive groups (NC, MCI, dementia). Most studies (n = 20, 77%) examined four or more domains, such as memory, attention, executive function, and visuospatial abilities. Three studies (12%) evaluated three domains, and three others investigated two or fewer domains.
1. MCI and Related Subtypes
In-clinic computerized assessments were as effective as standard tests and biomarkers in (1) detecting MCI and (2) differentiating NC from MCI and its subtypes. Five specific tools were administered solely to detect MCI, while four others were dedicated to distinguishing NC from MCI. In identifying subtypes such as aMCI and non-aMCI, sensitivity ranged from 57% to 76% [25, 30], with test-retest reliability (intraclass correlation coefficients) ranging from 0.50 to 0.94 [28, 30, 32].
Additionally, Alden et al. [22] linked cognitive performance to the AD-related biomarkers amyloid (A) and tau (T), enhancing differentiation between NC and MCI with positive biomarker status (AUC 0.75 to 0.93). However, NC versus MCI discrimination was low, at 38%.
Four studies compared novel tools against established screenings such as the MoCA or MMSE, and two compared them to more comprehensive neurocognitive tests [26]. The majority showed high convergent validity, indicated by correlations from r = 0.30 to 0.90. Notably, the most effective test variables for accurate classification covered diverse cognitive domains, such as (spatial) working memory, facial/visual memory, and association learning [22, 24]. Overall, subjects with probable MCI consistently exhibited slower or poorer cognitive performance than those with NC or subjective cognitive complaints across studies.
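Convergent validity of this kind is typically quantified as a correlation (e.g., Pearson’s r) between the computerized score and its paper-pencil counterpart. A minimal sketch follows, with all values fabricated for illustration rather than taken from the reviewed studies.

```python
# Illustrative convergent-validity check: correlating a computerized test
# score with a traditional benchmark (here labeled "moca"). Data are made up.
from scipy.stats import pearsonr

computerized = [18, 22, 25, 30, 14, 27, 21, 19, 29, 24]  # hypothetical tool scores
moca         = [20, 23, 26, 29, 15, 26, 22, 18, 30, 25]  # hypothetical MoCA scores

r, p = pearsonr(computerized, moca)
print(f"r = {r:.2f}, p = {p:.4f}")  # a strong positive r indicates convergent validity
```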
2. Dementia and Subtypes
The diagnostic performance of computerized tools for detecting (preclinical) dementia, such as AD or vascular dementia, was evaluated. The tools showed moderate correlations with standard tests (r = 0.30 to 0.51) [33, 37, 38]. Key variables for accurate classification included (spatial) memory, delayed recall, semantic fluency, and navigation skills [33, 34, 36, 37].
Davoudi et al. (2021) reported that graphomotor output effectively classified subjects, achieving AUCs of 91.52% for NC versus AD and 76.94% for AD versus vascular dementia. The IDEA tool, used in a rural Tanzanian sample, achieved a 79% AUC for dementia detection [34].
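AUCs of this kind express how well a continuous test output separates two diagnostic groups. A minimal sketch using fabricated labels and scores (not the study data) might be:

```python
# Illustrative AUC computation for separating NC from AD using a
# continuous test score. Labels and scores are fabricated for this sketch.
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]    # 0 = NC, 1 = AD
y_score = [0.10, 0.40, 0.35, 0.80, 0.30,    # hypothetical impairment index, NC
           0.70, 0.50, 0.90, 0.45, 0.75]    # hypothetical impairment index, AD

print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")  # 0.84 for these toy values
```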
Further analysis of psychometric properties showed that these tools eliminated practice effects by using alternate forms, with moderate to high test-retest reliability (intraclass correlation coefficients from 0.71 to 0.81) [32, 36, 37]. Coughlan et al. [36] examined cognitive markers in individuals at high (apolipoprotein E [APOE] ε4 allele) and low (APOE ε3 allele) genetic risk for AD, finding that boundary-based navigation, memory, and completion time within these tools effectively detected preclinical AD, despite no deterioration over 18 months. Notably, the remaining studies lacked comparison to non-dementia participants.
3. MCI and AD
This category of computerized tools aimed to differentiate between MCI and dementia. Two studies analyzed tools for MCI versus dementia/AD, while eight studies compared these groups with individuals with NC.
For MCI subtypes versus AD, machine learning achieved 80–90% accuracy with the dCDT [38]. The SAGE outperformed the MMSE in detecting subtle changes in MCI status over 12–18 months [41]. Four tools showed significant positive correlations (r = 0.52 to 0.72) with standard paper-pencil tests (i.e., MoCA, MMSE, the Clock Drawing Test, and ACE). AUC values for distinguishing MCI, dementia, and NC ranged from 0.72 (NNCT) to 0.95 (BHA).
BrainCheck achieved 64% (NC vs. MCI) to over 80% (NC vs. dementia) accurate classification rates [42]. Similarly, the RSVP showed 91% accuracy in categorizing mild AD, MCI, and NC [46]. Sensitivities and specificities were as follows (a sketch of how these metrics are derived appears after the list):

(1) MCI versus NC: sensitivities from 0.71 to 0.83 and specificities from 0.50 to 0.85.

(2) NC versus dementia or MCI: average sensitivity of 0.81, specificity from 0.80 to 0.91.

(3) NC versus dementia: sensitivities from 0.88 to 0.90 and specificities from 0.74 to 0.88.
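As a reminder of how these metrics are derived, here is a minimal arithmetic sketch with hypothetical confusion-matrix counts (not data from the included studies):

```python
# Sensitivity and specificity from a 2x2 confusion matrix.
# The counts below are hypothetical, chosen only to illustrate the arithmetic.
tp, fn = 83, 17   # impaired participants classified correctly / missed
tn, fp = 85, 15   # normal participants ruled out correctly / flagged falsely

sensitivity = tp / (tp + fn)   # 83 / 100 = 0.83: proportion of impaired cases detected
specificity = tn / (tn + fp)   # 85 / 100 = 0.85: proportion of normal cases ruled out

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```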
Key aspects for detecting impairment included (delayed) recognition tests, visual target identification/categorization, and executive function tasks (e.g., the Digit Symbol Substitution Test, DSST). The primary limitations were small sample sizes, lack of longitudinal data, and limited generalizability due to the exclusion of psychiatric comorbidities, skewed gender representation, and higher education levels.
Discussion
Different diagnostic groups, such as AD, frontotemporal dementia, and MCI, show unique cognitive profiles, influencing tool selection. Tools for preclinical dementia and AD focused on memory and visuospatial functions [48], while those for MCI assessed memory, attention, language, executive function, and processing speed [49, 50]. Automated assessments allowed remote monitoring of cognitive changes in MCI, with advanced tools using AI to assess both MCI and dementia.
Reliability and Validity of Computerized Tools
Computerized tools showed high test-retest reliability for detecting aMCI and differentiating NC and MCI, as shown in previous research [51]. They offer advantages like improved precision, inter-rater reliability, and reduced staffing costs. However, three studies reported low sensitivity of AD-related PET biomarkers in differentiating NC and MCI due to amyloid and tau accumulation in normal aging [52, 53]. High convergent validity with standard tests indicates reliable measurement of cognitive function, aligning with the frequent involvement of (spatial) working memory, facial or visual memory, and association learning in predicting or classifying MCI [54, 55].
Advancements in Computerized Assessments
Computerized assessments, including dCDT, IDEA, VST, SHQ, and SAGE, offer several advantages over traditional tests. They effectively categorize subjects, predict preclinical AD, and simulate real-world tasks, enhancing ecological validity and supporting differential diagnosis. They best identify cognitive decline through delayed recognition tests, visual target identification, and executive function tasks. Impaired delayed recall signals the transition to MCI, especially in AD [49, 56]. Visual tasks assessing visuospatial and semantic memory aid early detection, while executive function tasks help differentiate MCI from dementia [15, 51, 52].
Furthermore, these tools are particularly sensitive in distinguishing NC from dementia and can outperform traditional screening tests like the MMSE in detecting subtle changes over time. However, they are less effective at differentiating MCI from NC, indicating a need for further investigation. Recently, this limitation was addressed with CogEvo, a new computerized screening tool that effectively differentiated between groups with varying MMSE scores. Although CogEvo shows promise in detecting age-related cognitive decline without the limitations of the MMSE, such as educational bias and the ceiling effect, further refinement is needed to improve its discriminative power for early detection [57].
Technological Integration and Practical Considerations
At-home and community-based cognitive assessment tools improve accessibility and reflect real-world conditions, but face challenges like technological literacy and privacy concerns [8, 14, 58]. Integrating tablet-based tests into clinical practice can aid early detection and intervention, potentially reducing healthcare costs [58]. Automation of supervised tools may enhance clinical workflows, but feasibility studies are needed. The main advantages and associated challenges of technological integration into cognitive assessments are summarized in Table 4.
Table 4. Advantages and challenges/disadvantages of technological integration into cognitive assessments, covering patient-based factors, ease of administration, and technological aspects.
While such tools are increasingly utilized across various domains, including academic research and healthcare, their specific cost and pricing structures are often not disclosed to the public. Acquisition costs can vary significantly depending on the complexity of the technology. Some tools, such as BOCA, are freely available online, offering a cost-effective solution for broad-scale cognitive assessments. However, maintenance costs may arise to cover regular updates and security; for instance, the NIH Toolbox Cognitive Battery is available for an annual fee starting at USD 499 [59]. Additional costs include hardware, personnel training to ensure the reliability of assessments, as well as licensing fees. Investing in such tools supports longitudinal studies and large-scale screenings due to their seamless integration with electronic health systems and their capacity to analyze large volumes of data.
The context significantly impacts tool requirements; tools need higher specifications for clinical use than for research participant selection. Tablet-based tests have been integrated into clinical use when examiner supported. At the same time, self- and remotely administered tools sometimes lack reliability, although certain in-clinic self-administered assessments have shown reliability and validity [22, 23, 37, 60]. Additionally, the accessibility of these tools must be carefully considered, particularly in the context of physical limitations, such as reduced mobility and visual or auditory deficits, which are common in patients with MCI and dementia, as well as factors like the variety of end-user devices. This necessitates cognitive assessments that can be easily adapted to varying levels of physical ability, achieved through remote access from home, user-friendly interfaces, and customizable display options (e.g., adjustable text sizes and contrast settings).
AI in cognitive assessments offers precise, personalized evaluations tailored to an individual’s unique characteristics and cognitive fluctuations. AI can detect subtle patterns and adjust assessments based on cultural background and cognitive state, promising personalized interventions for cognitive enhancement [47]. However, these technologies raise ethical concerns around privacy and security, with potential risks of unauthorized access and data breaches. Robust data encryption, secure storage, and strict access controls are imperative to protect privacy. Transparent communication about data privacy measures and obtaining informed consent are essential ethical practices.
Moreover, AI algorithms might inherit biases from training data, leading to inaccurate assessments, particularly in diverse populations. Ensuring fairness and mitigating bias is critical, requiring regular audits, transparency in decision-making, and ongoing evaluation. While AI-driven assessments offer precision, they lack the human empathy of face-to-face interactions, which is vital for creating a supportive environment. Personal interaction provides emotional support, reassurance, and understanding. The absence of personal interaction in AI assessments may lead to a perceived impersonal approach, potentially affecting engagement and comfort.
Limitations
This review is not without limitations. Small validation sample sizes (fewer than 50 participants per diagnostic group in many studies) may impact the reliability and generalizability of the findings. The overrepresentation of females (60%) in most studies might skew the population representation. The use of various devices (smartphones, tablets, PCs) necessitates robust normative data for accurate interpretation and comparison of scores across platforms, to avoid misclassifications affecting diagnostic accuracy and treatment planning. The search was limited to three databases, and the included studies excluded subjects with comorbidities, which may affect cognitive performance and the progression of cognitive decline. The heterogeneity in diagnostic groups, methodological differences, and diverse outcome measures made a quantitative synthesis unfeasible. Finally, the search was limited to English-language studies, potentially overlooking promising tools in other languages.
Despite these limitations, this review advances the understanding of computerized cognitive assessments by synthesizing relevant studies and identifying trends. Its strength lies in considering psychometric qualities, technological aspects, and functional relevance across diagnostic groups and contextual settings. Moving forward, feasibility studies should explore implementation requirements for different disease stages and contextual settings to enhance early diagnosis of MCI and to support the differentiation and monitoring of dementia in older adults.
Conclusion
This review highlights the latest advances in computerized cognitive assessments for evaluating MCI and dementia, conditions that are increasingly prevalent globally among older adults. The link between technological advancements and empirical evidence is crucial for understanding cognitive function across diverse populations and environments. Computerized assessments offer significant benefits over traditional methods, such as minimizing errors and enabling real-time interpretation. They are particularly effective in tracking cognitive changes over time, which is crucial for early intervention. A collaborative approach involving researchers, clinicians, and policymakers is essential to harness these benefits, address remaining challenges, and improve early diagnosis and monitoring of cognitive decline.
Statement of Ethics
An ethics statement is not applicable because this study is based solely on the published literature.
Conflict of Interest Statement
The authors have no conflicts of interest to declare.
Funding Sources
This study was not supported by any sponsor or funder.
Author Contributions
C.H. conceptualized and designed the study and managed the database. Search strings were determined and data collected by C.H. and S.S. All authors contributed to reviewing and analyzing data. C.H., S.S., and C.N.W. drafted the manuscript. All authors revised the manuscript and read and approved the submitted version.
Data Availability Statement
The studies presented in the review are included in the article and the supplementary material. Further inquiries can be directed to the corresponding author.