Abstract
Cognitive training (CT) shows modest positive effects on cognitive function in patients with Parkinson’s disease (PD). Gamification may enhance adherence to traditional CT, but this has not yet been studied. Here, we investigated the feasibility of a gamified CT. We performed a randomized controlled trial including PD patients with mild cognitive impairment. Participants were randomly allocated to a 12-week home-based gamified CT intervention or a waiting-list control group. Assessments were performed at baseline and at weeks 12 and 24. Forty-one patients were included (21 intervention and 20 waiting-list controls). Sixty-three percent of the intervention group trained >50% of the recommended sessions, while 81% voluntarily continued training after 12 weeks. After 24 weeks, 87.5% rated the game as at least satisfactory. Global cognition scores improved after 24 weeks. Home-based gamified CT shows acceptable feasibility in patients with PD, and we observed preliminary indications of efficacy. Larger trials are needed to establish this efficacy.
Introduction
Cognitive impairment is an important concern for patients with Parkinson’s disease (PD) and decreases quality of life [1]. Moreover, cognitive impairment may progress to dementia in advanced disease stages, causing considerable caregiver burden and ultimately resulting in institutionalization [2]. Treatments are therefore needed, also because early interventions might slow progression to dementia [3]. Pharmacotherapy is not effective for mild cognitive impairment (MCI) and has only limited effects on PD dementia: cholinesterase inhibitors can delay cognitive decline for at least 6 months but increase the risk of adverse drug reactions [4]. Hence, non-pharmacological treatment for cognitive impairment might prove particularly meaningful. Recent work points to possible beneficial effects of cognitive training (CT). CT appears to be safe and cost-effective in PD, with modest positive effects on executive functions and working memory [5]. Gamification may further enhance traditional CT in terms of attractiveness and adherence [6], making patients more inclined to continue playing, increasing potential treatment effectiveness, and maintaining benefits over time. So far, the merits of gamified CT have not been investigated in PD patients. Our long-term aim is to perform a large randomized controlled trial to evaluate the impact of gamified CT on cognition in PD [7]. The focus of the present proof-of-principle study is to test the feasibility of and adherence to gamified CT.
Patients and Methods
A complete description of the methods of this randomized controlled trial has been published previously [7]. PD patients were recruited at 3 centers in the Netherlands. Inclusion criteria were PD (diagnosed by a neurologist), Hoehn and Yahr stage ≤3, age between 40 and 75 years, MCI according to the MDS Level 1 criteria, and stable dopaminergic medication during the previous 3 months. Patients with dementia were excluded. Eligible patients were randomized to either the gamified CT or no intervention (waiting-list control).

The intervention group was asked to play the online CT game (AquaSnap™) at home in an internet browser, with a recommended 3 weekly sessions of 30 min each, for at least 12 weeks (primary phase). Participants scheduled their own training, and session duration was not fixed. Both groups could voluntarily play the gamified CT from weeks 12 to 24 (secondary phase); the waiting-list control group did not train in the primary phase but was allowed to train in the secondary phase. AquaSnap™ is an adaptive CT game exercising 5 cognitive domains: attention, working memory, episodic memory, psychomotor speed, and executive function. The player is an underwater photographer who explores the ocean and completes cognitive tasks by taking pictures of fish. Pictures earn in-game currency, which is used to progress to more difficult game levels. To promote adherence, various game elements are incorporated, such as goals, challenges/missions, reward systems, personalization, and 3D environments. To assess the feasibility of stand-alone CT, patients trained unsupervised at home, and only reactive support was provided when requested by the patient.

At baseline and after 12 and 24 weeks, a standard neuropsychological assessment battery including self-report questionnaires was administered in the ON medication state (Table 1), as well as an online cognitive assessment (MyCQ™). This 30-min MyCQ™ assessment was included to automatically adapt the game difficulty level to the performance of the patient, thereby personalizing the intervention [8]. After completing the 12-week training period, participants completed an intervention acceptability questionnaire, in which they were asked to grade the overall intervention. The waiting-list control group completed this questionnaire 24 weeks after the randomization visit, whereas the intervention group received the questionnaire after 12 and 24 weeks. Assessors were not blinded to treatment allocation.

Missing data were imputed prior to statistical analysis. To replicate outcome measures and use a priori defined criteria, 4 feasibility outcome measures (accessibility, training compliance, technical smoothness, and training motivation) were derived from previous studies on computerized CT [9, 10]. The effect of gamified CT on global cognition at 12 and 24 weeks was analyzed using 2-tailed t tests for independent samples.
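As an illustration of this primary efficacy analysis, the minimal sketch below applies a 2-tailed independent-samples t test to global cognition composite scores; it assumes the scores are expressed as z-score composites, and the group arrays are hypothetical placeholders rather than study data.

```python
# Minimal sketch (not the study code): a 2-tailed independent-samples t test
# comparing global cognition composite scores between groups.
# The arrays below are hypothetical placeholders, not actual trial data.
import numpy as np
from scipy import stats

intervention = np.array([0.31, -0.05, 0.42, 0.10, 0.25, -0.12, 0.38, 0.07])
waiting_list = np.array([-0.20, 0.05, -0.45, 0.12, -0.30, -0.08, 0.02, -0.55])

# equal_var=True gives the classical Student t test; Welch's correction
# (equal_var=False) would be a reasonable alternative if variances differ.
t_stat, p_value = stats.ttest_ind(intervention, waiting_list, equal_var=True)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.3f}")
```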
Results
Forty-one patients were included (21 intervention and 20 waiting-list controls). Baseline characteristics were similar between both groups, including cognitive status (Table 1). Three patients dropped out of the intervention group, due to depressive symptoms (n = 1) or technical issues with logging in and running the game and assessment in their browsers (n = 2) (Table 1). One patient dropped out of the waiting-list control group because of similar technical difficulties and 2 because of planned surgery.
During the primary phase from baseline to week 12, the mean number of sessions per week within the intervention group was 2.9 ± 3.4, with a mean duration of 89 ± 88 min/week (Table 1). In total, 35.5 ± 30.4 (range 17–120) sessions of the recommended 36 sessions (98.3%) were performed during the primary phase. Six participants (37.5%) performed more than 36 sessions, which was allowed, thereby inflating the average completion rate. When the number of sessions was capped at 36 for these 6 participants, the intervention group performed on average 24.6 ± 11.3 (range 7–36) sessions (68.3%). Between weeks 5 and 8, the intervention group on average trained the required number of sessions, and this number decreased between weeks 17 and 20. In terms of duration, 11 participants (68.8%) trained more than 50% of the recommended training minutes in the primary phase, of whom 7 trained more than 100% (see online suppl. Figure, www.karger.com/doi/10.1159/000509685). Two intervention participants did not start training because technical issues prevented them from doing so; both were therefore excluded from the study. In the secondary, voluntary phase from weeks 12 to 24, the intervention group trained 2.3 ± 2.6 sessions/week and 63 ± 71 min/week. Three participants (18.8%) did not perform additional sessions after the primary phase.
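To make the capping step explicit, the sketch below recomputes the average completion rate after truncating session counts at the recommended 36 sessions; the per-participant counts are hypothetical and chosen only to illustrate how uncapped counts inflate the average.

```python
# Illustration of the completion-rate calculation with the 36-session cap;
# the per-participant session counts are hypothetical, not study data.
import numpy as np

recommended_sessions = 36
sessions = np.array([120, 80, 45, 40, 38, 37, 36, 30, 28, 25, 22, 20, 18, 12, 9, 7])

raw_mean = sessions.mean()  # inflated by participants who trained > 36 sessions
capped = np.minimum(sessions, recommended_sessions)
capped_mean = capped.mean()
completion_rate = capped_mean / recommended_sessions * 100

print(f"uncapped mean: {raw_mean:.1f} sessions")
print(f"capped mean:   {capped_mean:.1f} sessions ({completion_rate:.1f}% of recommended)")
```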
Global cognition scores improved in the intervention group after 24 weeks of training compared with the waiting-list control group: 0.149 ± 0.275 versus −0.175 ± 0.680, respectively (t = −1.96, p = 0.049, d = 0.63; Table 1). However, no significant difference between groups was observed at the 12-week follow-up.
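For reference, a between-group effect size of this kind can be derived from summary statistics alone. The sketch below uses the reported group means and SDs with assumed group sizes (16 and 17) purely for illustration, so the resulting d may deviate slightly from the reported value.

```python
# Sketch of Cohen's d from summary statistics (pooled-SD formulation).
# Means/SDs are the reported values; the group sizes are assumed for
# illustration and may differ from the analyzed sample.
import math

mean_int, sd_int, n_int = 0.149, 0.275, 16   # intervention group
mean_ctl, sd_ctl, n_ctl = -0.175, 0.680, 17  # waiting-list control group

pooled_sd = math.sqrt(((n_int - 1) * sd_int**2 + (n_ctl - 1) * sd_ctl**2)
                      / (n_int + n_ctl - 2))
cohens_d = (mean_int - mean_ctl) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")
```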
Feasibility outcome measures based on the intervention feasibility criteria by Verhelst et al. [10] are presented in Table 2. Most, but not all, criteria were met: (1) all intervention group participants understood the goal of the game; (2) the average training completion for the intervention group after 12 weeks was 68.2% (SD = 31.3, range 19.4–100%). Eight of the 16 intervention participants carried out at least 86% of the sessions during the 12-week training intervention, while 3 participants (18.8%) completed less than 40% of the sessions. (3) In total, half of the participants encountered technical issues resulting from login and plug-in support difficulties. Although these issues were resolved to prevent future interruptions, some were critical, since they resulted in dropout (n = 2). (4) Most participants (87.5%) gave at least a “satisfactory” grade on the 10-point acceptability questionnaire, while only 1 (6.3%) gave an “unsatisfactory” grade. The mean acceptability questionnaire grade was 6.8 ± 1.2 (range 4–8) after 12 weeks of training. Twelve intervention participants (75%) would recommend the game to others. In the secondary phase, the proportion of participants from the intervention group who reported liking the game increased from 47 to 64%. In the waiting-list control group, which was allowed to play the game in the secondary phase only, the mean acceptability grade was 8.0 ± 0.9 (range 6–10), indicating that they appreciated the intervention more than the intervention group did.
Discussion
This study evaluated the feasibility of a home-based gamified CT (AquaSnap™) in a group of PD patients with MCI. The results show acceptable feasibility of this online CT approach. As such, it holds promise for a future larger clinical trial. To reflect on the feasibility of the study, different aspects of both the CT and the study design should be addressed.
Reflecting on the feasibility of our CT, we found that training compliance was generally high for both the intervention and waiting-list control groups. However, we observed considerable variation in the number of participants completing the recommended number of training sessions.
In our study, 68% of the required training sessions were completed after 12 weeks, and 69% of the participants played more than half of the prescribed training duration. The observed variation might be explained by the relatively noncommittal nature of the training instructions. Although we recommended that participants train 30 min per session for 3 weekly sessions, this instruction was not strict, and the training sessions were not closely monitored by the research staff. Hence, participants may have felt free to play according to their own needs, resulting in considerable variation. In comparison, in a 5-week computerized cognitive intervention pilot study in 9 persons with Huntington’s disease, 77% started the training, and of those, 100% completed all training sessions [11]. In another study of 59 persons with multiple sclerosis, 71% started the training, and of those, 81% played more than half of the prescribed training sessions [12]. The results of the current study are in line with these observations, although the percentage of PD-MCI patients playing more than half of the prescribed training was somewhat lower in our study. This could be due to differences in patient profiles and age. We have no indication that the lower training adherence was due to unclear game objectives, since all participants reported understanding the game objectives and rules, and none required additional game instructions. Study participants received both extensive in-person game instruction at home and paper support manuals. This approach can be relevant for clinical practice, since high treatment complexity and limited treatment knowledge are barriers to therapy compliance [13].
As in some other studies [14], many participants experienced technical issues, which likely hindered adherence and therapy compliance. Making the gamified CT available as an app for smartphones and tablets will presumably decrease technical problems and increase usability; for our CT, this has since been implemented in a new version of the product. This is especially relevant since many MCI patients have compromised problem-solving skills [15] and are therefore less likely to resolve technical issues successfully. In addition, the investigated (now obsolete) version of the game was designed to be engaging for up to 18 training hours, after which no additional content was added to the gameplay. Consequently, some participants indicated that the game had, at some point, become monotonous. This is reflected in our results: between weeks 17 and 20, the number of sessions started to decrease. Since prolonged periods of training or repeated booster sessions are likely beneficial for long-term cognitive effects [16], future gamified interventions should include sufficient gameplay variety (levels, challenges, missions, positive feedback elements, social challenges, unlockable content, and continuous updates) to achieve optimal engagement. Nevertheless, almost all participants gave a satisfactory overall grade for the game, and 75% would recommend the game to others. The more positive feedback of the waiting-list control group could be related to different expectations generated after the initial waiting period, the nonbinding nature of the secondary phase, or the improved problem-solving skills of the support staff. Altogether, this indicates that this type of training can be considered relevant for this target population; however, it may only be suitable for a certain proportion of patients with PD-MCI. More research is needed to investigate for which part of the PD-MCI population online CT is most suitable.
In other populations (such as non-PD dementia or healthy older adults), gamification and personalization of therapies are perceived as motivating and attractive [17, 18], but firm empirical evidence is still lacking [19]. To increase adherence, the gamified CT used in this study incorporated an online cognition assessment (MyCQ™) that was used to adapt the training to the patients’ cognitive abilities. As such, the CT focused on personalized and engaging gameplay over longer periods of time. One of the most frequently reported reasons for declining participation in an intervention study is the need to travel to an institute for the assessments [9]. MyCQ™, once validated, can be particularly helpful in this regard, since cognitive assessments can then be performed in the home situation. Moreover, our patients could also train at home at any time. However, remote supervision or technical assistance for patients training at home could be of specific relevance for this group of patients. Whereas many previous studies provided remote supervision [9, 20], we intentionally provided only reactive technical support to test stand-alone feasibility and adherence. Participants could schedule their own training, independently of the research staff, and end sessions whenever they liked. On the other hand, reactive technical assistance instead of proactive remote supervision may have had a negative impact on adherence, as reflected in the considerable variation in the number of training sessions and training duration. The advantages of remote supervision are better monitoring of progression, the possibility to give personalized feedback, and motivational interviewing to boost adherence [9, 11, 15]. Additionally, participants can be given a more structured and scheduled therapy with regard to the timing, number, and duration of sessions. Thus, a blended approach combining personalized and engaging gamified CT at home with low-threshold remote supervision seems to have great potential for the future.
Our primary goal was to test the feasibility of a gamified CT. Secondarily, we also explored potential effects, and our analyses showed a significant improvement in global cognition scores after 24 weeks. This effect was not found after 12 weeks. The improvement was more pronounced in a subgroup analysis (2-tailed t tests for independent samples) in which the intervention group was compared with a passive control group (i.e., only the participants in the waiting-list control group who were inactive in the secondary phase were included as controls). This may suggest that gamified CT interventions need to be performed for a minimum (cumulative) amount of time to affect cognition and that continuous training may be essential for prolonged cognitive benefits over time. Previous traditional or computerized CT programs also found small to modest effects on cognition in PD, mainly on measures of processing speed, working memory, and executive functions [5]. However, none of those used a home-based, gamified, and personalized CT, and most trials used heterogeneous cognitive outcomes and short follow-up periods, hampering a direct comparison with these studies. Although this proof-of-principle study included a relatively large sample of participants compared with other CT interventions, larger studies with robust cognitive outcomes that also focus on translation into daily life are needed to determine the effectiveness of this intervention. Additionally, trials should include blinded assessors, an active control group receiving a mock intervention, and longer follow-up periods.
In conclusion, this study shows that PD patients with MCI can perform an online, gamified, and unsupervised CT on a regular basis in their homes, with a potential effect on global cognition after 24 weeks of training. The intervention feasibility criteria were mostly met, but improvements are needed to increase usability. Even if online gamified CT may not be suitable for all patients, many PD patients are interested in such interventions [6], underscoring the need to further investigate the efficacy of gamified CT in larger PD trials.
Acknowledgements
We thank Mirella Davies-Waber, Vienna Kooijman, Amée Wolters, Joni de Kriek, Ashley van Woerkom, Tiny Sporken, and Jessica Hubbers for their contributions to this study.
Statement of Ethics
All subjects gave their written informed consent, and the study protocol was approved by the institute’s committee on human research: METC azM Maastricht, number NL51188.068.14.
Conflict of Interest Statement
The authors have no conflicts of interest to declare.
Funding Sources
The Parkin’Play study was funded by MyCognition, London, UK. The sponsor had no role in the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript.
Author Contributions
S.C.F.W., A.A.D., B.R.B., G.T., and M.L.K. were involved in the conception, organization, and execution of the research project. S.C.F.W. and S.K. were involved in the execution of the statistical analysis. R.P.C.K. and G.T. were involved in the organization of the research project. All authors were involved in the review of the analysis and the writing of the manuscript.