Introduction: Timely detection of cognitive impairment such as mild cognitive impairment (MCI) or dementia is pivotal in initiating early interventions to improve patients’ quality of life. Conventional paper-and-pencil tests, though common, have limited sensitivity in detecting subtle cognitive changes. Computerized assessments offer promising alternatives, overcoming time and manual scoring constraints while potentially providing greater sensitivity. Methods: A literature search yielded 26 eligible articles (2020–2023). The articles were reviewed according to PRISMA guidelines, and the computerized tools were categorized by diagnostic outcome (MCI, dementia, combined). Results: The subjects included in the studies were aged 55–77 years. The overall gender distribution comprised 60% females and 40% males. Sample sizes varied considerably, from 22 to 4,486. Convergent validity assessments in 20 studies demonstrated strong positive correlations with traditional tests. Overall classification accuracy in detecting MCI or dementia and distinguishing them from normal cognition (NC) reached up to 91%. Notably, 46% of the studies received high-quality ratings, underscoring the reliability and validity of the findings. Conclusion: The review highlights the advancements in computerized cognitive assessments for assessing MCI and dementia. This shift toward technology-based assessments could enhance detection capabilities and facilitate timely interventions for better patient outcomes.

The prevalence of dementia is rising as the global population ages, yet it remains significantly underdiagnosed [1, 2]. A Johns Hopkins University study found that 39.5% of individuals over 65 years meeting the dementia criteria were undiagnosed [3]. Another study estimated that globally, 75% of individuals with dementia are undiagnosed [4]. Detecting dementia early is important, as the number of dementia cases is expected to rise significantly; according to the World Health Organization (WHO), projections are 78 million in the next decade and potentially soaring to 139 million by 2050 [5].

The underdiagnosis of dementia is mainly due to the complexities of clinical diagnosis, which involves time-consuming tests and documentation. Computerized cognitive assessments can help alleviate this issue by providing automated administration and real-time scoring, thereby reducing costs, minimizing errors, decreasing intra- or interexaminer variability, and offering interpretative reports and care recommendations. This could potentially lead to faster and easier clinical decision-making and improved patient outcomes [6, 7].

Current paper- and pencil-based screenings like the Mini-Mental State Examination (MMSE), the Clock Drawing Test, and the Mini-Cog are accurate in detecting dementia but less sensitive to early-stage cognitive changes and time-consuming for staff. They are also only conducted once at a given clinical examination, with long intervals between follow-up examinations [8]. New technology can mitigate these limitations [9]. Mobile devices enable repeated testing with alternate versions, reducing practice effects and enhancing accuracy and validity through longitudinal data [10]. This also allows for the creation of individualized cognitive trajectory profiles.

Detecting cognitive impairment is crucial for effective interventions that delay cognitive decline and improve quality of life. The widespread use of smartphones and tablets in modern society makes self-administered assessments feasible [11, 12], offering the potential for the earliest and easiest detection of progression from healthy aging to cognitive impairment. Mild cognitive impairment (MCI) marks a transitional stage between normal aging and dementia and, in the case of amnestic MCI (aMCI), presents with episodic memory impairment and raises the risk of progressing to Alzheimer’s disease (AD) [13].

Web-based tools offer language options and self-administration, improving accessibility [10], especially in remote areas [14]. For instance, the neotiv platform offers a “remote digital memory composite score” comprising three nonverbal memory subtests to distinguish between healthy individuals and those with MCI [15]. Computerized tools can measure neurobehavioral patterns and incorporate artificial intelligence (AI), complementing traditional tests [16]. Most importantly, ecologically valid repeated tests enable patients to take assessments both when fatigued and when well rested, offering a more accurate picture of the effects of tiredness on cognitive performance.

Despite these benefits, digital cognitive assessments present challenges, including hardware and software variability, Internet quality, data protection concerns, and examinee-related factors like technology familiarity and anxiety [17]. One study found differences between traditional and digital versions of assessments such as the MoCA (eMoCA), particularly in visuospatial/executive domains, for participants with less touchscreen experience [18]. Privacy concerns also arise from storing and sharing cognitive data, especially when collecting identifiable data such as handwriting or voice recordings [19]. Additionally, while many older adults have adopted digital technology, some still lack access or expertise, potentially excluding them from research.

A recent review highlighted 10 self-administered computerized cognitive measures proposed for clinical settings only, prompting an evaluation of their potential application in nonclinical settings [20]. This review aims to assess how technological advancements can meet evolving societal needs. Our systematic review focuses on self-administered and examiner-supported computerized cognitive assessments for older adults, investigating their validity and reliability in distinguishing between normal cognition (NC), MCI, and dementia in this demographic.

Search Strategy

From March 1 to May 31, 2023, a systematic literature search was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [21]. PubMed, Scopus, and Web of Science were searched using terms such as “cognitive assessment,” “computerized,” “dementia,” “older adults,” and “MCI.” The detailed search string is in Supplementary Material 1 (SM1) (for all online suppl. material, see https://doi.org/10.1159/000541627). Only peer-reviewed articles in English published between 2020 and 2023 were considered, following specific inclusion and exclusion criteria outlined in Table 1.

Table 1.

Inclusion and exclusion criteria for literature search

Inclusion criteria 
  • Studies reporting computerized cognitive assessments aimed at detecting MCI and/or dementia

  • Studies with a control group (participants with NC)

  • Participants’ age ≥50 years

  • Studies providing psychometric background validity information

  • Studies describing self-administered or examiner-supported cognitive assessment tools

  • Studies with diagnosis according to standardized diagnostic criteria, such as MMSE, MoCA, CDR, or diagnosis from a neurologist, e.g., based on brain imaging, or blood tests

 
Exclusion criteria 
  • Studies describing computerized cognitive assessment tools in individuals with comorbidities, such as Parkinson’s disease, multiple sclerosis, etc.

  • Studies reporting computerized cognitive assessments that have not been validated (i.e., tested and confirmed for accuracy and reliability) in English

  • Studies reporting complex or costly hardware or software components, e.g., 3D glasses for virtual reality or eye-tracking applications

  • Studies in a language other than English

 

After duplicate removal, 2,149 reports underwent screening for exclusion criteria based on titles and abstracts. Sixty-five articles were selected for full-text review to assess eligibility (Fig. 1). Of these records, 23 met inclusion criteria, with three additional records found through citation searching, totaling 26 articles in the review. The primary reasons for exclusion were studies including participants with comorbidities, absence of MCI or dementia diagnosis, and participants younger than 50 years.

Fig. 1.

PRISMA flow diagram for identification of publications.

Fig. 1.

PRISMA flow diagram for identification of publications.

Close modal

Data Extraction

The research findings were managed using Mendeley Reference Manager (Desktop v1.19.8) and examined by two authors. Articles underwent title and abstract screening. During the full-text review, data extracted for each study included: (1) tool name and characteristics, (2) composition of the validation sample, (3) validation data and effect size, (4) standard neuropsychological tests used for comparison, and (5) study setting. Additional data, including scoring automation and system availability, were collected to assess tool quality.

Quality Assessment

A modified model based on Tsoy et al. (2021) was used for the quality assessment. Parameters included investigated domains, sample size, reliability, validity, automation level, administration types, marketplace availability, scoring/reporting facilities, language options, data security, and feasibility studies (online suppl. Tables 1, 2). Studies rated three or higher in at least four areas were considered to be of high quality.
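The high-quality rule just described can be sketched as a simple threshold check; this is an illustrative reading of the rule only, and the parameter names below are placeholders, not the exact labels of the quality model.

```python
# Sketch of the high-quality rule described above (assumption: each quality
# parameter receives a numeric rating; parameter names are illustrative).
def is_high_quality(ratings):
    """Return True if at least four parameters are rated three or higher."""
    return sum(1 for score in ratings.values() if score >= 3) >= 4


# Example: four parameters rated >= 3, so the study counts as high quality.
example = {"validity": 4, "reliability": 3, "sample_size": 3,
           "automation": 5, "languages": 1}
```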

The 26 studies reviewed were categorized into three groups: MCI and subtypes, (preclinical) dementia, and both. Group 1 included 11 studies using nine cognitive tools, group 2 had six studies with various tools, and group 3 comprised ten distinct computerized cognitive assessments (see Table 2).

Table 2.

Overview of cognitive tools across studies for three diagnosis groups

(1) MCI and subtypes (SMC, aMCI, eoMCI)
  • CBB: Alden et al. [22] (2021); Kairys et al. [23] (2022)
  • NIHTB-CB: Kairys et al. [23] (2022)
  • CANTAB: Campos-Magdaleno et al. [24] (2020)
  • BHA: Paterson et al. [25] (2021)
  • FACEmemory®: Alegret et al. [26] (2022)
  • BOCA: Vyshedskiy et al. [27] (2022)
  • SMART: Dorociak et al. [28] (2021)
  • VST: Iliadou et al. [29] (2021); Zygouris et al. [30] (2020)
  • CAKe: Fritch et al. [31] (2023)

(2) (Preclinical) dementia: AD, VaD
  • C3: Papp et al. [32] (2021)
  • dCDT: Davoudi et al. [33] (2021)
  • IDEA cognitive screen: Paddick et al. [34] (2021)
  • SLS: Stricker et al. [35] (2022)
  • VST, SHQ: Coughlan et al. [36] (2020)
  • BRANCH: Papp et al. [37] (2021)

(3) Both (1) and (2)
  • dCDT: Binaco et al. [38] (2020)
  • C-ABC: Noguchi-Shinohara et al. [39] (2020)
  • BHA: Rodríguez-Salgado et al. [40] (2021)
  • SAGE: Scharre et al. [41] (2021)
  • BrainCheck: Ye et al. [42] (2022)
  • EC-Screen: Chan et al. [43] (2020)
  • NNCT: Oliva and Losa [44] (2022)
  • COST-A: Visser et al. [45] (2021)
  • RSVP: Perez-Valero et al. [46] (2023)
  • ICA: Kalafatis et al. [47] (2021)

eoMCI, early-onset MCI; SMC, subjective memory complaints; VaD, vascular dementia.

Characteristics of Included Studies

The characteristics of the 26 studies are outlined in online supplementary Table 3 (S3). Sample sizes ranged from 22 to 4,486 participants, and samples were mostly female (60%). Population-based samples tended to be larger than clinical trials. Participant ages ranged from 55 to 80 years. Participants had at least 12 years of education, except in one study with lower education levels [24]. Most were white, with some studies focusing exclusively on Asian [43] and Tanzanian/African cohorts [34]. Studies compared computerized assessments with the MMSE, MoCA, imaging biomarkers, and cerebrospinal fluid biomarkers. Diagnostic subgroups included aMCI, mixed MCI, and early-onset MCI. Seven studies included individuals with subjective cognitive complaints or subjective memory complaints: four studies (57%) treated these individuals as having cognitive impairment, while three (43%) used them as a control group. The dementia group included mild forms of AD, preclinical AD (Aβ+), ε3ε4 carriers, and non-AD dementia.

Characteristics of Computerized Tools

Tools were additionally classified into three types, as indicated in Table 3: in-clinic/tablet-based, remote assessments, and innovative data analysis. Remote assessments were the most strongly represented in this review, while groups 1 and 3 were roughly equally distributed.

Table 3.

Classification of cognitive computerized tools

Predominantly in-clinic and tablet-based cognitive assessments: CogState BB and NIH Toolbox BB; Computerized Cognitive Composite (C3); CANTAB; dCDT; C-ABC

Remotely administered assessments for at-home use and communities, using mobile devices and PCs: BHA; SAGE; BrainCheck; FACEmemory; BRANCH; BOCA; SLS; EC-Screen; SMART; NNCT; COST-A

Novel tools/use of AI: IDEA; VST; Miro Health Mobile Assessment Platform; CAKe; RSVP; ICA

In 38% of the studies, assessments were conducted on a single type of tablet, two of them specifically on iPads. Four studies (15%) offered a choice between a PC and a tablet, while six studies (23%) were performed on PCs or laptops only. Additionally, two studies (8%) were conducted primarily on smartphones and two (8%) used digital pens. Half of the assessments were self-administered, and the rest were examiner-supported. Six (23%) provided automated scoring and reporting, twelve (46%) offered automated scoring only, and five studies (19%) did not report details about automation.

Settings varied across the three types of tools. The in-clinic/tablet-based tools were used in population samples [22], community dwellings [23], memory clinics [27], and research settings [28]. Remote assessments were used in preclinical [32], community [33], population, and research settings [36]. Innovative data analysis tools focused primarily on clinical settings [40, 41, 46] but were also used in research [42, 44] and in community and population samples [43, 45]. Of these, only eight tools were validated with ≥50 participants per diagnostic group. Administration times ranged from 5 to 30 min. Twelve tools were commercially available, while three required no dedicated purchase. Feasibility outcomes were reported in most studies, with 22 showing preliminary outcomes and four providing comprehensive feasibility data.

Language availability varied (reported in 17 out of 26 studies), with some tools available in multiple languages. Four tools were available in two languages, and eight tools were available in three or more languages. Data security was reported for all tools except one (NNCT). A high-quality rating was, therefore, awarded to 46% of the studies.

Comparison of Cognitive Outcomes

We assessed the efficacy of computerized cognitive assessments versus standard tests in identifying and distinguishing between cognitive groups (NC, MCI, dementia). Most studies (n = 20, 78%) examined four or more domains, such as memory, attention, executive function, and visuospatial abilities. Three studies (11%) evaluated three domains, and three others investigated two or fewer domains. Notably, the most effective test variables for accurate classification covered diverse cognitive domains, such as (spatial) working memory, facial/visual memory, and association learning [22, 24].

1. MCI and Related Subtypes

In-clinic computerized assessments were as effective as standard tests and biomarkers in (1) detecting MCI and (2) differentiating NC from MCI and its subtypes. Five specific tools were administered solely to detect MCI, while four others were dedicated to distinguishing NC from MCI. In identifying subtypes such as aMCI and non-aMCI, sensitivity ranged from 57% to 76% [25, 30], with test-retest reliability (intraclass correlation coefficients) ranging from 0.50 to 0.94 [28, 30, 32].

Additionally, Alden et al. [22] linked cognitive performance to the AD-related biomarkers amyloid (A) and tau (T), enhancing differentiation between NC and MCI with positive biomarker status (AUC 0.75 to 0.93). However, NC versus MCI discrimination was low at 38%.

Four studies compared novel tools against established screenings such as the MoCA or MMSE, and two against more comprehensive neurocognitive tests [26]. The majority showed high convergent validity, indicated by correlations from r = 0.30 to 0.90. Overall, subjects with probable MCI consistently exhibited slower or poorer cognitive performance than those with NC or subjective cognitive complaints across studies.

2. Dementia and Subtypes

The diagnostic performance of computerized tools for detecting (preclinical) dementia, such as AD or vascular dementia, was evaluated. The tools showed moderate correlations with standard tests (r = 0.30 to 0.51) [33, 37, 38]. Key variables for accurate classification included (spatial) memory, delayed recall, semantic fluency, and navigation skills [33, 34, 36, 37].

Davoudi et al. [33] (2021) reported that graphomotor output effectively classified subjects, achieving AUCs of 91.52% for NC versus AD and 76.94% for AD versus vascular dementia. The IDEA tool, used in a rural Tanzanian sample, achieved an AUC of 79% for dementia detection [34].

Further analysis of psychometric properties showed that these tools eliminated practice effects through alternate forms, with moderate to high test-retest reliability (intraclass correlation coefficients from 0.71 to 0.81) [32, 36, 37]. Coughlan et al. [36] (2020) examined cognitive markers in individuals at high (apolipoprotein E [APOE] ε4 allele) and low (APOE ε3 allele) genetic risk for AD, finding that boundary-based navigation, memory, and completion time within these tools effectively detected preclinical AD, despite no deterioration over 18 months. Notably, the remaining studies lacked a comparison with non-dementia participants.

3. MCI and AD

This category of computerized tools aimed at differentiating between MCI and dementia. Two studies analyzed tools for MCI versus dementia/AD, while eight studies compared these groups with individuals with NC.

For MCI subtypes versus AD, machine learning achieved 80–90% accuracy with the dCDT [38]. SAGE outperformed MMSE in detecting subtle changes in MCI status over 12–18 months [41]. Four tools showed significant positive correlations (r = 0.52 to 0.72) with standard paper-pencil tests (i.e., MoCA, MMSE, Clock drawing, ACE). AUC values for distinguishing MCI, dementia, and NC ranged from 0.72 (NNCT) to 0.95 (BHA).

BrainCheck achieved accurate classification rates from 64% (NC vs. MCI) to over 80% (NC vs. dementia) [42]. Similarly, the RSVP showed 91% accuracy in categorizing mild AD, MCI, and NC [46]. Sensitivities and specificities were as follows:

  • (1) MCI versus NC: sensitivities from 0.71 to 0.83 and specificities from 0.50 to 0.85.

  • (2) NC versus dementia or MCI: average sensitivity of 0.81, specificity from 0.80 to 0.91.

  • (3) NC versus dementia: sensitivities from 0.88 to 0.90, specificities from 0.74 to 0.88.
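For readers less familiar with these metrics, the following minimal sketch shows how sensitivity, specificity, and AUC are computed from test scores and diagnostic labels. The scores and cutoff below are invented for illustration only, not data from the reviewed studies.

```python
# Illustrative only: toy impairment scores, not data from the reviewed studies.
# Label 1 = impaired (MCI/dementia), 0 = normal cognition; higher score = worse.

def sensitivity_specificity(scores, labels, cutoff):
    """Flag impairment when score >= cutoff; return (sensitivity, specificity)."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """AUC as the probability an impaired case outranks a control (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Varying the cutoff trades sensitivity against specificity, which is why studies report both; the AUC summarizes discrimination across all possible cutoffs.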

Key aspects for detecting impairment included (delayed) recognition tests, visual target identification/categorization, and executive functions (e.g., DSST). The primary limitations were small sample sizes, lack of longitudinal data, and limited generalizability due to exclusion of psychiatric comorbidities, skewed gender representation, and higher education levels.

Different diagnostic groups, such as AD, frontotemporal dementia, and MCI, show unique cognitive profiles, influencing tool selection. Tools for preclinical dementia and AD focused on memory and visuospatial functions [48], while those for MCI assessed memory, attention, language, executive function, and processing speed [49, 50]. Automated assessments allowed remote monitoring of cognitive changes in MCI, with advanced tools using AI to assess both MCI and dementia.

Reliability and Validity of Computerized Tools

Computerized tools showed high test-retest reliability for detecting aMCI and differentiating NC and MCI, as shown in previous research [51]. They offer advantages like improved precision, inter-rater reliability, and reduced staffing costs. However, three studies reported low sensitivity of AD-related PET biomarkers in differentiating NC and MCI due to amyloid and tau accumulation in normal aging [52, 53]. High convergent validity with standard tests indicates reliable measurement of cognitive function, aligning with the frequent involvement of (spatial) working memory, facial or visual memory, and association learning in predicting or classifying MCI [54, 55].

Advancements in Computerized Assessments

Computerized assessments, including dCDT, IDEA, VST, SHQ, and SAGE, offer several advantages over traditional tests. They effectively categorize subjects, predict preclinical AD, and simulate real-world tasks, enhancing ecological validity and supporting differential diagnosis. They best identify cognitive decline through delayed recognition tests, visual target identification, and executive function tasks. Impaired delayed recall signals the transition to MCI, especially in AD [49, 56]. Visual tasks assessing visuospatial and semantic memory aid early detection, while executive function tasks help differentiate MCI from dementia [15, 51, 52].

These tools are also particularly sensitive in distinguishing NC from dementia and can outperform traditional screening tests such as the MMSE in detecting subtle changes over time. However, these methods are less effective at differentiating MCI from NC, indicating a need for further investigation. Recently, this limitation was addressed with CogEvo, a new computerized screening tool that effectively differentiated between groups with varying MMSE scores. Although CogEvo shows promise in detecting age-related cognitive decline without the limitations of the MMSE, such as educational bias and the ceiling effect, further refinement is needed to improve its discriminative power for early detection [57].

Technological Integration and Practical Considerations

At-home and community-based cognitive assessment tools improve accessibility and reflect real-world conditions, but face challenges like technological literacy and privacy concerns [8, 14, 58]. Integrating tablet-based tests into clinical practice can aid early detection and intervention, potentially reducing healthcare costs [58]. Automation of supervised tools may enhance clinical workflows, but feasibility studies are needed. The main advantages and associated challenges of technological integration into cognitive assessments are summarized in Table 4.

Table 4.

Textbox summarizing advantages and challenges/disadvantages of using (AI) technology for cognitive assessment

Patient-based

 Advantages:
  • Precise, personalized evaluations
  • Detects subtle patterns
  • Adjusts for cultural background
  • Facilitates remote monitoring
  • Ecological testing environment/task types

 Challenges/disadvantages:
  • Lacks human empathy
  • Can negatively affect engagement and comfort
  • Barriers due to access to technology and technological literacy

Ease of administration

 Advantages:
  • High precision and consistency
  • Comparative ease and efficiency of administration, evaluation, and reporting of results
  • Outperforms traditional tests in detecting changes
  • Reduces staffing costs

 Challenges/disadvantages:
  • Requires regular audits, updates to devices, and transparency

Technological aspects

 Challenges/disadvantages:
  • Privacy and security concerns
  • Risks of data breaches
  • Potential biases in training data
  • Need for clear data privacy protection and communication

While such tools are increasingly utilized across various domains, including academic research and healthcare, their specific cost and pricing structures are often not disclosed to the public. Acquisition costs can vary significantly depending on the complexity of the technology. Some tools, such as BOCA, are freely available online, offering a cost-effective solution for broad-scale cognitive assessments. However, maintenance costs may arise to cover regular updates and security; for instance, the NIH Toolbox Cognitive Battery is available for an annual fee starting at USD 499 [59]. Additional costs include hardware, personnel training to ensure the reliability of assessments, as well as licensing fees. Investing in such tools supports longitudinal studies and large-scale screenings due to their seamless integration with electronic health systems and their capacity to analyze large volumes of data.

The context significantly affects tool requirements: tools need higher specifications for clinical use than for research participant selection. Tablet-based tests have been integrated into clinical use when examiner-supported, whereas self- and remotely administered tools sometimes lack reliability, although certain in-clinic self-administered assessments have shown reliability and validity [22, 23, 37, 60]. Additionally, the accessibility of these tools must be carefully considered, particularly in the context of physical limitations, such as reduced mobility and visual or auditory deficits, which are common in patients with MCI and dementia, as well as factors like the variety of end-user devices. This calls for cognitive assessments that can be easily adapted to varying levels of physical ability through remote access from home, user-friendly interfaces, and customizable display options (e.g., adjustable text sizes and contrast settings).

AI in cognitive assessments offers precise, personalized evaluations tailored to an individual’s unique characteristics and cognitive fluctuations. AI can detect subtle patterns and adjust assessments based on cultural background and cognitive state, promising personalized interventions for cognitive enhancement [47]. However, these technologies raise ethical concerns around privacy and security, with potential risks of unauthorized access and data breaches. Robust data encryption, secure storage, and strict access controls are imperative to protect privacy. Transparent communication about data privacy measures and obtaining informed consent are essential ethical practices.

Moreover, AI algorithms might inherit biases from training data, leading to inaccurate assessments, particularly in diverse populations. Ensuring fairness and mitigating bias is critical, requiring regular audits, transparency in decision-making, and ongoing evaluation. While AI-driven assessments offer precision, they lack the human empathy of face-to-face interactions, which is vital for creating a supportive environment. Personal interaction provides emotional support, reassurance, and understanding. The absence of personal interaction in AI assessments may lead to a perceived impersonal approach, potentially affecting engagement and comfort.

Limitations

This review is not without limitations. Small validation samples (fewer than 50 participants per diagnostic group for most tools) may limit the reliability and generalizability of the findings. The overrepresentation of females (60%) in most studies may skew population representation. The use of various devices (smartphones, tablets, PCs) necessitates robust normative data so that scores can be accurately interpreted and compared across platforms, avoiding misclassifications that affect diagnostic accuracy and treatment planning. The search was limited to three databases and excluded subjects with comorbidities, which may affect cognitive performance and the progression of cognitive decline. The heterogeneity of diagnostic groups, methodological differences, and diverse outcome measures made a quantitative review unfeasible. Finally, the studies were limited to English, potentially overlooking promising tools in other languages.

Despite limitations, this review advances the understanding of computerized cognitive assessments by synthesizing relevant studies and identifying trends. Its strength lies in considering psychometric qualities, technological aspects, and functional relevance across diagnostic groups and contextual settings. Moving forward, future feasibility studies should explore implementation requirements for different disease stages and contextual settings to enhance early diagnosis of MCI and support differentiation and monitoring dementia in older adults.

The review highlights the latest advances in computerized cognitive assessments for evaluating MCI and dementia, which are increasingly prevalent globally among older adults. The link between technological advancements and empirical evidence is crucial for understanding cognitive function across diverse populations and environments. Computerized assessments offer significant benefits over traditional methods, such as minimizing errors and enabling real-time interpretation. They are particularly effective in tracking cognitive changes over time, which is crucial for early intervention. A collaborative approach involving researchers, clinicians, and policymakers is essential to harness these benefits, address challenges, and improve early diagnosis and monitoring of cognitive decline.

An ethics statement is not applicable because this study is based solely on the published literature.

The authors have no conflicts of interest to declare.

This study was not supported by any sponsor or funder.

C.H. conceptualized and designed the study and managed the database. Search strings were determined and data collected by C.H. and S.S. All authors contributed to reviewing and analyzing data. C.H., S.S., and C.N.W. drafted the manuscript. All authors revised the manuscript and read and approved the submitted version.

The studies presented in the review are included in the article and the supplementary material. Further inquiries can be directed to the corresponding author.

1. Rasmussen J, Langerman H. Alzheimer's disease: why we need early diagnosis. Degener Neurol Neuromuscul Dis. 2019;9:123–30.
2. Lang L, Clifford A, Wei L, Zhang D, Leung D, Augustine G, et al. Prevalence and determinants of undetected dementia in the community: a systematic literature review and a meta-analysis. BMJ Open. 2017;7(2):e011146.
3. Amjad H, Roth D, Sheehan O, Lyketsos C, Wolff J, Samus Q. Underdiagnosis of dementia: an observational study of patterns in diagnosis and awareness in US older adults. J Gen Intern Med. 2018;33.
4. Alzheimer's Disease International (ADI). World Alzheimer report 2021: journey through the diagnosis of dementia. London; 2021.
5. WHO. Global status report on the public health response to dementia. 2021.
6. Possin KL, Moskowitz T, Erlhoff SJ, Rogers KM, Johnson ET, Steele NZR, et al. The brain health assessment for detecting and diagnosing neurocognitive disorders. J Am Geriatr Soc. 2018;66(1):150–6.
7. Djulbegovic B, Guyatt G. Evidence vs consensus in clinical practice guidelines. JAMA. 2019;322.
8. Scott J, Mayo A. Instruments for detection and screening of cognitive impairment for older adults in primary care settings: a review. Geriatr Nurs. 2018;39(3):323–9.
9. Öhman F, Hassenstab J, Berron D, Schöll M, Papp K. Current advances in digital cognitive assessment for preclinical Alzheimer's disease. In: Alzheimer's & Dementia: Diagnosis, Assessment & Disease Monitoring. 2021.
10. Parsons T, Mcmahan T, Kane R. Practice parameters facilitating adoption of advanced technologies for enhancing neuropsychological assessment paradigms. Clin Neuropsychol. 2018;32:16–41.
11. Wilson S, Byrne P, Rodgers S, Maden M. A systematic review of smartphone and tablet use by older adults with and without cognitive impairment. Innov Aging. 2022;6(2):igac002.
12. Vaportzis E, Clausen M, Gow A. Older adults perceptions of technology and barriers to interacting with tablet computers: a focus group study. Front Psychol. 2017;8:1687.
13. Csukly G, Sirály E, Fodor Z, Horváth A, Salacz P, Hidasi Z, et al. The differentiation of amnestic type MCI from the non-amnestic types by structural MRI. Front Aging Neurosci. 2016;8:52.
14. Zeghari R, Guerchouche R, Tran Duc M, Bremond F, Lemoine MP, Bultingaire V, et al. Pilot study to assess the feasibility of a mobile unit for remote cognitive screening of isolated elderly in rural areas. Int J Environ Res Public Health. 2021;18(11):6108.
15. Berron D, Glanz W, Clark L, Basche K, Grande X, Güsten J, et al. A remote digital memory composite to detect cognitive impairment in memory clinic samples in unsupervised settings using mobile devices. NPJ Digit Med. 2024;7(1):79.
16. Laske C, Sohrabi HR, Frost SM, López-de-Ipiña K, Garrard P, Buscema M, et al. Innovative diagnostic tools for early detection of Alzheimer's disease. Alzheimers Dement. 2015;11(5):561–78.
17. Werner P, Korczyn A. Willingness to use computerized systems for the diagnosis of dementia: testing a theoretical model in an Israeli sample. Alzheimer Dis Assoc Disord. 2012;26(2):171–8.
18. Wallace S, Donoso Brown E, Simpson R, D'Acunto K, Kranjec A, Rodgers M, et al. A comparison of electronic and paper versions of the Montreal cognitive assessment. Alzheimer Dis Assoc Disord. 2019;33(3):272–8.
19. Kerkhoff TR, Hanson SL. Ethics in the practice of neuropsychology. In: Encyclopedia of clinical neuropsychology. New York, NY: Springer New York; 2011. p. 977–81.
20. Tsoy E, Zygouris S, Possin KL. Current state of self-administered brief computerized cognitive assessments for detection of cognitive disorders in older adults: a systematic review. J Prev Alzheimers Dis. 2021;8(3):267–76.
21. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Rev. 2021;10(1):89.
22. Alden EC, Pudumjee SB, Lundt ES, Albertson SM, Machulda MM, Kremers WK, et al. Diagnostic accuracy of the Cogstate Brief Battery for prevalent MCI and prodromal AD (MCI A+T+) in a population-based sample. Alzheimers Dement. 2021;17(4):584–94.
23. Kairys A, Daugherty A, Kavcic V, Shair S, Persad C, Heidebrink J, et al. Laptop-administered NIH Toolbox and Cogstate brief battery in community-dwelling black adults: unexpected pattern of cognitive performance between MCI and healthy controls. J Int Neuropsychol Soc. 2022;28(3):239–48.
24. Campos-Magdaleno M, Leiva D, Pereiro AX, Lojo-Seoane C, Mallo SC, Facal D, et al. Changes in visual memory in mild cognitive impairment: a longitudinal study with CANTAB. Psychol Med. 2021;51(14):2465–75.
25. Paterson T, Sivajohan B, Gardner S, Binns M, Stokes K, Freedman M, et al. Accuracy of a self-administered online cognitive assessment in detecting amnestic mild cognitive impairment. J Gerontol Ser B. 2021;77.
26. Alegret M, Sotolongo-Grau O, de Antonio EE, Pérez-Cordón A, Orellana A, Espinosa A, et al. Automatized FACEmemory® scoring is related to Alzheimer's disease phenotype and biomarkers in early-onset mild cognitive impairment: the BIOFACE cohort. Alzheimers Res Ther. 2022;14(1):43.
27. Vyshedskiy A, Netson R, Fridberg E, Jagadeesan P, Arnold M, Barnett S, et al. Boston cognitive assessment (BOCA): a comprehensive self-administered smartphone- and computer-based at-home test for longitudinal tracking of cognitive performance. BMC Neurol. 2022;22(1):92.
28. Dorociak K, Mattek N, Lee J, Leese M, Bouranis N, Imtiaz D, et al. The survey for memory, attention, and reaction time (SMART): development and validation of a brief web-based measure of cognition for older adults. Gerontology. 2021;67(6):740–52.
29. Iliadou P, Paliokas I, Zygouris S, Lazarou E, Votis K, Tzovaras D, et al. A comparison of traditional and serious game-based digital markers of cognition in older adults with mild cognitive impairment and healthy controls. J Alzheimers Dis. 2021;79(4):1747–59.
30. Zygouris S, Iliadou P, Lazarou E, Giakoumis D, Votis K, Alexiadis A, et al. Detection of mild cognitive impairment in an at-risk group of older adults: can a novel self-administered serious game-based screening test improve diagnostic accuracy? J Alzheimers Dis. 2020;25:78.
31. Fritch HA, Moo LR, Sullivan MA, Thakral PP, Slotnick SD. Impaired cognitive performance in older adults is associated with deficits in item memory and memory for object features. Brain Cogn. 2023;166:105957.
32. Papp K, Rentz D, Maruff P, Sun CK, Raman R, Donohue M, et al. The computerized cognitive composite (C3) in an Alzheimer's disease secondary prevention trial. J Prev Alzheimers Dis. 2021;8:59–67.
33. Davoudi A, Dion C, Amini S, Tighe PJ, Price CC, Libon DJ, et al. Classifying non-dementia and Alzheimer's disease/vascular dementia patients using kinematic, time-based, and visuospatial parameters: the digital Clock drawing test. J Alzheimers Dis. 2021;82(1):47–57.
34. Paddick SM, Yoseph M, Gray WK, Andrea D, Barber R, Colgan A, et al. Effectiveness of app-based cognitive screening for dementia by lay health workers in low resource settings. A validation and feasibility study in rural Tanzania. J Geriatr Psychiatry Neurol. 2021;34(6):613–21.
35. Stricker N, Stricker J, Karstens A, Geske J, Fields J, Hassenstab J, et al. A novel computer adaptive word list memory test optimized for remote assessment: psychometric properties and associations with neurodegenerative biomarkers in older women without dementia. Alzheimers Dement (Amst). 2022;14(1):14.
36. Coughlan G, Puthusseryppady V, Lowry E, Gillings R, Spiers H, Minihane AM, et al. Test-retest reliability of spatial navigation in adults at-risk of Alzheimer's disease. PLoS One. 2020;15(9):e0239077.
37. Papp KV, Samaroo A, Chou HC, Buckley R, Schneider OR, Hsieh S, et al. Unsupervised mobile cognitive testing for use in preclinical Alzheimer's disease. Alzheimers Dement. 2021;13(1):e12243.
38. Binaco R, Calzaretto N, Epifano J, McGuire S, Umer M, Emrani S, et al. Machine learning analysis of digital Clock drawing test performance for differential classification of mild cognitive impairment subtypes versus Alzheimer's disease. J Int Neuropsychol Soc. 2020;26(7):690–700.
39. Noguchi-Shinohara M, Domoto C, Yoshida T, Niwa K, Yuki-Nozaki S, Samuraki-Yokohama M, et al. A new computerized assessment battery for cognition (C-ABC) to detect mild cognitive impairment and dementia around 5 min. PLoS One. 2020;15(12):e0243469.
40. Rodriguez-Salgado AM, Llibre-Guerra JJ, Peñalver AI, Tsoy E, Bringas G, Guerra JCL, et al. Brief digital cognitive assessment for detection of cognitive impairment in low- and middle-income countries. Alzheimers Dement. 2021;17(S10):e050888.
41. Scharre DW, Chang S, Nagaraja HN, Wheeler NC, Kataki M. Self-Administered Gerocognitive Examination: longitudinal cohort testing for the early detection of dementia conversion. Alzheimers Res Ther. 2021;13(1):192.
42. Ye S, Sun K, Huynh D, Phi H, Ko B, Huang B, et al. A computerized cognitive test battery for detection of dementia and mild cognitive impairment: instrument validation study. JMIR Aging. 2022;5(2):e36825.
43. Chan JYC, Wong A, Yiu B, Mok H, Lam P, Kwan P, et al. Electronic cognitive screen technology for screening older adults with dementia and mild cognitive impairment in a community setting: development and validation study. J Med Internet Res. 2020;22(12):e17332.
44. Oliva I, Losa J. Validation of the computerized cognitive assessment test: NNCT. Int J Environ Res Public Health. 2022;19(17):10495.
45. Visser L, Dubbelman M, Verrijp M, Wanders L, Pelt S, Zwan M, et al. The Cognitive Online Self-Test Amsterdam (COST-A): establishing norm scores in a community-dwelling population. Alzheimers Dement (Amst). 2021;13(1):13.
46. Perez-Valero E, Gutierrez CAM, Lopez-Gordo MA, Alcalde SL. Evaluating the feasibility of cognitive impairment detection in Alzheimer's disease screening using a computerized visual dynamic test. J NeuroEng Rehabil. 2023;20(1):43.
47. Kalafatis C, Modarres M, Apostolou P, Marefat H, Khanbagi M, Karimi H, et al. Validity and cultural generalisability of a 5-minute AI-based, computerised cognitive assessment in mild cognitive impairment and Alzheimer's dementia. Front Psychiatry. 2021;12:706695.
48. Dos Santos TTBA, de Carvalho RLS, Nogueira M, Baptista M, Kimura N, Lacerda IB, et al. The relationship between social cognition and executive functions in Alzheimer's disease: a systematic review. Curr Alzheimer Res. 2020;17(5):487–97.
49. Albert M, DeKosky S, Dickson D, Dubois B, Feldman H, Fox N, et al. The diagnosis of mild cognitive impairment due to Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimers Dement. 2011;7(3):270–9.
50. Sabbagh MN, Boada M, Borson S, Doraiswamy PM, Dubois B, Ingram J, et al. Early detection of mild cognitive impairment (MCI) in an at-home setting. J Prev Alzheimers Dis. 2020;7(3):171–8.
51. Cid REC, Crocco EA, Kitaigorodsky M, Beaufils L, Peña P, Grau GA, et al. A novel computerized cognitive stress test to detect mild cognitive impairment. J Prev Alzheimers Dis. 2021;8:135–41.
52. Doré V, Krishnadas N, Bourgeat P, Huang K, Li S, Burnham S, et al. Relationship between amyloid and tau levels and its impact on tau spreading. Eur J Nucl Med Mol Imaging. 2021;48(7):2225–32.
53. Jutten RJ, Sikkes SAM, Amariglio RE, Buckley RF, Properzi MJ, Marshall GA, et al. Identifying sensitive measures of cognitive decline at different clinical stages of Alzheimer's disease. J Int Neuropsychol Soc. 2021;27(5):426–38.
54. Kawagoe T, Matsushita M, Hashimoto M, Ikeda M, Sekiyama K. Face-specific memory deficits and changes in eye scanning patterns among patients with amnestic mild cognitive impairment. Sci Rep. 2017;7(1):14344.
55. Derbie AY, Dejenie M, Zegeye TG. Visuospatial representation in patients with mild cognitive impairment: implication for rehabilitation. Medicine. 2022;101(44):e31462.
56. Talamonti D, Koscik R, Johnson S, Bruno D. Predicting early mild cognitive impairment with free recall: the primacy of primacy. Arch Clin Neuropsychol. 2020;35(2):133–42.
57. Satoh T, Sawada Y, Saba H, Kitamoto H, Kato Y, Shiozuka Y, et al. Assessment of mild cognitive impairment using CogEvo: a computerized cognitive function assessment tool. J Prim Care Community Health. 2024;15:21501319241239228.
58. Livingston G, Huntley J, Sommerlad A, Ames D, Ballard C, Banerjee S, et al. Dementia prevention, intervention, and care: 2020 report of the Lancet Commission. Lancet. 2020;396(10248):413–46.
59. Trifilio E, Bowers D. NIH Toolbox cognition battery. In: Gu D, Dupre ME, editors. Encyclopedia of gerontology and population aging [internet]. Cham: Springer International Publishing; 2019. p. 1–6.
60. Geddes M, O'Connell M, Fisk J, Gauthier S, Camicioli R, Ismail Z, et al. Remote cognitive and behavioral assessment: report of the Alzheimer Society of Canada Task Force on dementia care best practices for COVID-19. Alzheimers Dement (Amst). 2020;12(1):e12111.