Over the last several years, there has been rapid growth of digital technologies attempting to transform healthcare. Unique features of digital medicine technology lead to both challenges and opportunities for testing and validation. Yet little guidance exists to help a health system decide whether to undertake a pilot test of a new technology, move right to full-scale adoption, or start somewhere in between. To navigate this complexity, this paper proposes an algorithm to help choose the best path toward validation and adoption. Special attention is paid to whether the needs of patients with limited digital skills, equipment (e.g., smartphones), and connectivity (e.g., data plans) have been considered in technology development and deployment. The algorithm reflects the collective experience of the 20+ health systems and academic institutions that have established the Network of Digital Evidence for Health, NODE.Health, plus insights from existing clinical research taxonomies, syntheses, and frameworks for assessing technology or for reporting clinical trials.

Over the last several years, there has been rapid growth of digital technologies aimed at transforming healthcare [1, 2]. Such innovations include patient-focused technologies (e.g., wearable activity tracking devices), other connected devices (glucometers, blood pressure cuffs, etc.) as well as the broader category of analytic applications used by health systems to monitor disease status, predict risk, make treatment decisions or monitor administrative performance. The emergence of nontraditional players in healthcare (e.g., Amazon and Google) [3, 4] is creating tremendous pressure on health systems to innovate. Yet little guidance exists to help a health system decide whether to undertake a pilot test of new technology, move right to full-scale adoption or start somewhere in between [5].

Unique features of digital medicine technology lead to both challenges and opportunities for testing and validation. To navigate this complexity, this paper proposes an algorithm (Fig. 1) to help health systems determine the appropriate pathway to digital medicine validation or adoption. The algorithm was developed based on the collective experiences of 20+ health systems and academic institutions that have established the Network of Digital Evidence for Health, NODE.Health. In addition, evidence-based clinical research taxonomies, syntheses, or frameworks for assessing technology or for improving reporting of clinical trials identified from a literature review framed our work. Informed by Realist Synthesis methods [6, 7], the Immersion/Crystallization method [8] was used to understand how a product works by looking at the relationships among the product, the context, the user population, and outcomes. This approach is especially relevant to digital technology where complex interventions are continuously updated to reflect the user and context [9]. Concepts emerging from our literature review of contextual issues include classification of users [10‒12], use of technology for collecting outcomes [13], and factors associated with scale-up [14, 15]. Concepts related to understanding outcomes were found in the literature related to methods of usability testing [16, 17], research design [18‒22], product evaluation or efficacy testing [23‒26], and reporting of research [24, 27‒30]. To better understand the mechanism of action, we considered classification of technology [31‒35], patient engagement or technology adoption [36‒42], and reasons for nonuse of technology [43].

Fig. 1.

The digital health testing algorithm.


To make effective use of the algorithm, some key features of digital medicine products and their development ecosystem are discussed below. The user-task-context (eUTC) framework provides a lens for considering choices or plans for technology testing and adoption [44]. First, the institution should define its use case(s) for the product – know the target population (users), the required functions (tasks), and understand what is required for successful adoption within a specific setting. In assessing the fit of a technology under consideration, one should consider whether a product has been shown to work in the target user population. User dimensions especially important to health technology adoption include age, language, education, technology skills, and access to underlying necessary equipment and connectivity (e.g., smartphones or computers and broadband or mobile data). Similarly, one must ensure that a technology successfully used in one setting was used for the same tasks as are needed in the target situation. For example, reports may show that a program performed well with scheduling in a comparable setting but that no evidence was provided about its use for referrals, the task required by the target setting. Finally, numerous dimensions of the setting could affect the likelihood of success; consider, for example, the physical location, the urgency of the situation [44] and the social context (e.g., whether researchers or clinicians are involved, and their motivations for doing so) [45].

The potential for a misalignment of goals and incentives of institutional customers and technology developers and vendors is a dimension of the social context that is unique to digital health products. For example, a developer may encourage pilot studies as a prelude to making a lucrative sale but ultimately be unable to execute a deal because the pilot testers lack the authority or oversight to commit to an enterprise-wide purchase [46]. Or, health systems might want to test modifications of existing solutions to increase impact, but the needed updates may not fit with the entrepreneur’s business plan. Finally, there must be adequate alignment between the institution and the product developer in terms of project goals, timeline, budget, capital requirements, and technical support.

The traditional clinical testing paradigm does not neatly apply to digital medicine because many digital solutions do not require regulatory oversight [47]. Mobile applications that pose a risk to patient safety [48], make medical claims, or are seeking medical grade certification [49] invoke Food and Drug Administration (FDA) oversight. For others, the agency reserves “enforcement discretion,” meaning it retains the right to regulate low-risk applications [50] but allows such products to be sold without explicit approval. Senior US FDA officials acknowledge that the current regulatory framework “is not well suited for software-based technologies, including mobile apps….” [51]. The agency is gradually issuing guidance to address mobile health applications that fall in the gray area. For example, the FDA recently stated that clinician-facing clinical decision support will not require regulation, but that applications with patient-facing decision support tools may be regulated [52]. In light of the shifting landscape and gaps in regulatory oversight of digital health applications, we review some unique features of digital health technologies and then offer an adaptation of the traditional clinical trial phases to provide a common language for discussing testing of digital medicine products.

Digital health tools have some inherent advantages over traditional pharmaceutical and device solutions in that they may be commercially available at low cost, can undergo rapid iteration cycles, be rapidly disseminated through cell phones or the internet, can collect data on an ongoing rather than episodic basis, and can make data available in real time [53]. Digital tools can be highly adaptive for different populations, settings, and disease conditions. However, these very features create challenges for testing because digital tools can be used differently by different people, especially in the absence of training and standardization [53].

Much digital medicine technology is developed in the private sector, where secrecy and speed trump the traditional norms of transparency and scientific rigor, especially in the absence of regulatory oversight. The ability to rapidly modify digital medicine software is essential for early-stage technology, but algorithm updates made without evaluators’ knowledge can upend a study. On the other hand, in light of the average 7-year process to develop, test and publish results from a new technology, failure to update software could lead to obtaining results on an obsolete product [21, 53]. Digital products based on commercial off-the-shelf software are especially likely to undergo software changes compared with digital solutions built exclusively for a designated setting. Thus, it is important to understand where a technology is in its life cycle as well as to know whether the testers have a relationship with the developers whereby such background software changes would be disclosed or scheduled around the needs of the trial.

The type of company developing the digital technology can also affect testing dynamics. Start-up companies may have little research experience and limited capital to support operations at scale. Giants in the technology, consumer, transportation or communication sectors that are developing digital health tools may have the ability to conduct research at a massive scale [54, 55] (e.g., A/B testing by a social media company) but without the transparency typical of the established scientific method. Both types of companies may lack experience working with the health sector and thus be unfamiliar with bifurcated clinical and administrative leadership.

In light of the many unique dimensions of digital health technology, Table 1 presents questions to consider when initiating or progressing with a phased trial process. These questions have been informed by the experience of NODE.Health in validating digital medicine technologies [47, 56‒59]; by industry-sponsored surveys of digital health founders, health system leaders, and technology developers [60, 61]; and by other industry commentary [46, 62].

Table 1.

Special considerations for working with start-ups and nontraditionala firms


Table 2 proposes a pathway for testing digital health products that addresses their unique characteristics and dynamics. The testing pathway used for regulated products is shown on the left with parallel digital testing phases shown on the right half of Table 2.

Table 2.

Crosswalk of clinical and digital medicine testing phases


Preclinical studies, often performed in animals, are meant to explore the safety of a drug or device prior to administration in humans [63]. The digital medicine analog is either absent or perhaps limited to determination of whether or not a product uses regulated technology such as cell phones that emit radiation.

Traditional and digital phase I studies may be the first studies of a drug or a technology in humans, are generally nonrandomized, and typically include a small number of healthy volunteers, or individuals with a defined condition. Whereas the purpose of traditional phase I studies is to test the agent’s safety and find the highest dose that can be tolerated by humans [63], the digital equivalent usually begins with a prototype product that may not be fully functioning. These early digital phase I studies increasingly use “design thinking” [16, 64] to identify relevant use cases and rapidly prototype solutions. Through focus group discussions, interviews, and surveys, developers find out what features people with the target condition want and conduct usability testing to see how initial designs work.

Traditional phase II studies look at how a drug works in the body of individuals with the disease or condition of interest. By using historical controls or randomization, investigators can get a preliminary indication of a drug’s effectiveness so that endpoints and the sample size needed for an efficacy trial can be identified [63]. These techniques may require that study volunteers be homogeneous, limiting understanding of product use among the full range of target product users. The digital analogue could be a feasibility study in which an application is tested with individuals or in settings reflecting the targeted end users. Because nonregulated digital products are by definition at low or no risk of harm, phase II studies of digital products may involve larger numbers of subjects than traditional trials.

Phase III trials of traditional and digital products are typically randomized studies to determine the efficacy of the drug or digital intervention. Ideally, volunteers reflect the full range of individuals with the condition, but, at least with therapeutic and device trials, the study population is likely to be homogeneous with respect to variation in disease severity, comorbidities and age to control main sources of bias and maximize the ability to observe a significant difference between the treatment and control arms [65]. Digital studies may have fewer eligibility criteria resulting in studies that could be more broadly generalizable.

Because traditional efficacy trials are not likely large enough to detect rare adverse events, the FDA may require postmarketing (phase IV) studies to detect such events once a drug or device is in widespread use [65]. For low-risk applications, postmarketing surveillance studies may be unnecessary. Instead, we propose a class of studies that apply implementation science methods such as rapid cycle testing to generate new knowledge about technology dissemination beyond the efficacy testing environment and identification of unintended consequences in specific real-world settings. Such trials may involve assessing the requirements for integrating the novel product with existing technology and determining staffing needs, training, and workflow changes. In this category, we also include strategies that allow for testing multiple questions in a short period of time, such as “n of one” and factorial studies [21]. Again, the Realist Synthesis perspective points to using these studies to best understand how and why a product does or does not work in specific settings [9].

Pragmatic trials are a next step common to both traditional and digital interventions; they seek to understand effectiveness in a real-world setting and with diverse populations [22, 66]. Pragmatic studies of digital technology could be undertaken completely outside of the context of healthcare settings such as with studies that recruit participants and deliver interventions through social media [67], that deliver interventions and collect data from biometric sensors and smartphone apps [68], or that market health-related services such as genetic testing directly to consumers [69]. The FDA’s move toward the use of real-world evidence and real-world data suggests that digital health tools used by consumers may assume greater importance for testing drugs and devices [65] as well as nonregulated products.

Many digital solutions can be widely adopted because they lack regulatory oversight and are commercially available to consumers. However, the specter of widespread utilization may magnify shortcomings of a testing process that may not have been conducted adequately in diverse settings and/or with desired groups of end users. The RE-AIM framework has been used since the late 1990s to evaluate the extent to which an intervention reaches, and is representative of, the affected population [70]. More recently, the framework has been used to plan and implement research studies [71] and to assess adoption of community-based healthcare initiatives [72]. Therefore, application of the RE-AIM framework to digital medicine testing can be especially valuable for those seeking to understand the likelihood that a specific digital health solution will work in a specific setting. Table 3 summarizes the original RE-AIM domains and then applies them to illuminate reliability and validity issues that could arise in testing digital medicine technologies.

Table 3.

RE-AIM dimension and key questions for testing decisions


The RE-AIM Dimensions and Key Questions

“Reach” concerns the extent to which populations involved in the testing reflect populations intended to use the product. “Effectiveness” concerns whether the product was effective overall or in subgroups typically defined by health status or age. Based on standards that have been articulated to improve the reporting of web-based and mobile health interventions [28] and reporting of health equity [29], we extend the traditional Reach and Effectiveness dimensions to highlight specific populations in whom digital medicine technologies might exhibit heterogeneity of effect due to “social disadvantage.” Traditional social determinants of health (education, race, income) plus age are highly associated with access to and use of the internet and smartphones. Digital skills are, in turn, highly associated with having and using computers and smartphones, as such skills develop from use [73], and usage grows with access to equipment and connectivity [74]. Digital skills, computers or smartphones, and mobile data or fixed broadband are essential for almost all consumer-facing health technology such as remote monitors and activity tracking devices. Given the covariance of digital skills and access with age, education, and income [75‒80], and the likelihood that those with low digital skills may resist adoption of a digital technology or may have challenges using it [56, 81‒84], it is especially important to determine whether such individuals were included in prior testing of a product and whether prior studies examined heterogeneity in response by such categories.

For products that have proven effectiveness, “Adoption” concerns the incorporation of the intervention into a given context after pilot testing. Stellefson’s application of the RE-AIM model to assessing the use of Web 2.0 interventions for chronic disease self-management in older adults is especially instructive for assessing how a healthcare institution moves to Adoption after a trial [85]. To what extent does the current setting replicate the context of previous trials? If a product was tested in an in-patient setting but is being considered for use in an ambulatory setting, for example, it is vital to understand who would need to be involved in a decision to move to adoption after efficacy testing. Do current staff have the skills and authority needed to integrate new technology? Where will funds for the technology adoption project come from?

“Implementation” refers to the types of support needed for the intervention to be administered and how well it was accepted by end users. “Maintenance” addresses the sustainability of the intervention in a setting and the duration of effect in the individual user.

We next apply the RE-AIM framework to help health systems assess evidence gaps and identify the most appropriate testing to use in a specific situation. For a product at an early stage of development, as shown in Table 4, questions of “Reach” are most relevant. Since early testing focuses on how well a device or application functions, there may be limited information available about how well the device functions among individuals or settings similar to those under consideration. For a technology that has completed an efficacy trial, it is important to understand the extent to which the test results would apply in different populations and settings. It is also crucial to understand features of the study such as whether careful instruction and close monitoring were necessary for the technology to function optimally. For a technology with ample evidence of efficacy in relevant populations and settings, those considering moving to widespread adoption must understand institutional factors associated with successful deployment such as engagement of marketing and IT departments to integrate data systems.

Table 4.

Application of RE-AIM framework to digital medicine testing by phase


The Digital Health Testing Algorithm

The digital health testing algorithm (Fig. 1) presents a series of questions to help a health system determine what sort of testing, if any, may be needed when considering adoption of a new technology.
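Although the full branching logic lives in Fig. 1, the general flow of the algorithm can be sketched in code. The sketch below is illustrative only, not the published algorithm: the question wording, ordering, and the `next_step` function are our own assumptions distilled from the phase descriptions and case vignettes in this paper.

```python
# Illustrative sketch of a phased digital-health testing decision flow.
# NOT the exact Fig. 1 algorithm; the questions, their ordering, and all
# names here are assumptions drawn from the phase descriptions in the text.

def next_step(evidence: dict) -> str:
    """Suggest a testing phase from yes/no evidence about a product.

    Missing keys are treated as "no evidence yet", so an empty dict
    routes a brand-new product to phase I testing.
    """
    if not evidence.get("core_functions_verified"):
        return "Phase I: verify core functions (e.g., data mapping, usability)"
    if not evidence.get("feasible_in_target_population"):
        return "Phase II: feasibility with intended users and settings"
    if not evidence.get("efficacy_demonstrated"):
        return "Phase III: randomized efficacy trial"
    if not evidence.get("implementation_factors_understood"):
        return "Rapid-cycle implementation testing in one or two sites"
    return "Pragmatic trial or full-scale adoption"

# Example: the readmission-prediction vignette below, where data mapping
# has been validated in one ambulatory setting but nothing else is known.
print(next_step({"core_functions_verified": False}))
```

The point of the sketch is that each question gates the next: evidence gaps at an earlier phase send the institution back to cheaper, smaller studies before it commits resources to efficacy or pragmatic trials.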

A healthcare system was approached by a start-up company that had designed a new care coordination application that uses artificial intelligence to predict readmission risk for a particular condition. The tool simplifies the clinical workflow by pulling multiple data sources together into a single dashboard. For this first attempt at use in a clinical setting, an ambulatory care practice was asked to check the electronic medical record to validate that the product correctly aligned data from multiple systems. With that validation step complete and successful, the start-up company would like to embark on a small prospective phase II study to validate the accuracy of the prediction model. Is that the right next step?

The initial study was a phase I study that showed that the data mapping process worked correctly using data pertinent to an ambulatory care setting. Further phase I testing should be done to ensure that the data mapping process works more broadly, such as with in-patient data. Then, to validate the algorithm, the company could apply it to historical data and see how well it predicts known outcomes. This would still be part of phase I, simply ensuring that the product works as intended. Once the key functions are confirmed, additional qualitative studies should be undertaken to ensure that the product meets the needs of users. That process should lead to additional customization, based on real-time health system feedback. Once that process is complete, the company can move to phase II testing. For the first actual use in the intended clinical environment, health system leadership might identify a local clinical champion who would be thoughtful and astute about how to integrate the product into the clinical workflow. Health system information technology staff should be closely involved to ensure a smooth flow of data for the initial deployment. At this stage, utilization data should be examined closely, and staff should be debriefed to determine whether and how the product was used. Some initial signals that the product produces the desired outcome should be seen before proceeding to a phase III efficacy trial.

An independent medical practice is approached by a start-up company that has developed a mobile application to simplify management of a particular condition. As no clinical advice is involved, the application is not regulated by the FDA. The product has been used for 1 year in three other small- to medium-size practices. Feedback from the practices and patients was used to tweak the product. A pilot study with the improved product showed that satisfaction and utilization were very high among patients who were able to download the app onto their phones. Hospitalizations were lower and disease control better among patients who used the application compared with nonusers, but the results did not reach statistical significance. The start-up company has a Spanish language version of the product that was developed with input from native Spanish-speaking employees at the start-up company. They would like to conduct a phase III efficacy trial with a randomized design, using both English and Spanish language versions of the tool. Is that the right next step?

This product has completed phase II testing that showed efficacy signals among those who used the tool heavily. Before proceeding to an efficacy test in a comparable population, one must look carefully to see how patients who used the product differed from those who used it only a bit or not at all. If users are healthier and better educated, they may have been better able to manage their conditions even without the application. Qualitative assessments from staff and patients should be undertaken to ascertain why others did not use the product. If new functions are needed, they should be developed and tested in phase I studies. If operational barriers can be identified and addressed, rapid cycle testing could be undertaken, and then a repeat phase II study with the population that initially failed to use the product. In considering initiation of a trial in the new clinic, phase I studies must be undertaken to ensure that the product meets the needs of very different populations – including lower income, non-English-speaking, and patients with only rudimentary smartphones and limited skill in using apps. Then, a phase II study should be initiated to see whether the target population will use the app and whether efficacy signals are seen.

A mid-size faith-based community healthcare system serving a largely at-risk population has implemented a disease-specific digital chronic care management and patient engagement program at two of its larger facilities. The deployment went well, and data strongly show that the application has positive outcomes. The system would like to roll the product out at all 20 of its facilities, using a pragmatic design to capture data on the efficacy of the product in real-world use. Should they move straight to a pragmatic trial?

In this case, 4 of the 20 additional clinics have newly joined this health system; various administrative systems are still being transitioned. If the health system has an excellent understanding of the contextual factors associated with successful implementation, then they may proceed with a pragmatic trial. For example, they may see that significant additional staffing is required to manage patient calls and messages that come through the app. They might then incorporate a call center to efficiently manage patient interactions at scale. However, if such factors are unknown, they should undertake rapid cycle testing, perhaps deploying the system in just one of the new clinics, and using qualitative research methods to understand factors associated with successful adoption. The focus of the rapid cycle tests and the pragmatic trial is on testing methods of scaling the product, rather than on testing the product itself.

Digital solutions developed by technology companies and start-ups pose unique challenges for healthcare settings due to the potential for misalignment of goals between a health system and a technology company, the potential for unmeasured heterogeneity of effectiveness, and the need to understand institutional factors that may be crucial for successful adoption. Taxonomies and frameworks from public health and from efforts to improve the quality of clinical research publications lead to a set of questions that can be used to assess existing data and choose a testing pathway designed to ensure that products will be effective with the target populations and in target settings.

Some real-world examples [86, 87] illustrate the need for attention to the fit of a technology to the needed tasks, the user population, and the context of adoption.

Thies et al. [86] describe a “failed effectiveness trial” of a commercially available mHealth app designed to improve clinical outcomes for adult patients with uncontrolled diabetes and/or hypertension. Although tested very successfully in “the world’s leading diabetes research and clinical care organization” and at one of the nation’s leading teaching and research hospitals [88], use in a Federally Qualified Health Center failed for reasons clearly related to the “context” and the “users.” Patients reported a lack of interest in the app as “getting their chronic condition in better control did not rise to the level of urgency” for patients that it held with the clinic staff. Lack of access to and comfort with technology “may have reduced the usability of the app for patients who enrolled and dissuaded patients who were not interested.” Contextual factors included difficulty downloading the app due to limited internet access in the clinic, lack of clinic staff time to explain app use to patients, and lack of integration into clinic workflow since the app was not connected with the EHR [86].

In a pragmatic trial, 394 primary care patients living in West Philadelphia were called 2 days before their appointments with offers of Lyft rides. There were no differences in missed visits or in 7-day ED visits for those offered rides compared with patients whose appointments fell on days when rides were not offered. Investigators were able to reach only 73% of patients by phone to offer rides, of whom only 36.1% were interested in receiving the offered Lyft ride. Hundreds of health systems have now deployed ride-sharing services, but evidence of the impact has not been reported [87]. A considerable body of qualitative research may be needed to determine the best ways to deploy a resource that makes great sense intuitively but may not be received by users as envisioned.
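Those two rates compound, which helps explain the null result. As a back-of-the-envelope check (our arithmetic, not the trial authors'), the effective uptake among all targeted patients is the product of the reach and interest rates:

```python
# Uptake funnel for the ride-share trial described above. The two input
# rates come from the text; combining them is our own arithmetic.
reached = 0.73        # share of targeted patients reached by phone
interested = 0.361    # share of those reached who wanted a ride
uptake = reached * interested
print(f"Effective uptake: {uptake:.1%}")  # prints "Effective uptake: 26.4%"
```

In other words, roughly three in four targeted patients never accepted a ride, so even a strongly effective intervention would be diluted well below detectability in an intention-to-treat comparison.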

Although far from complete, the proposed algorithm attempts to provide some structure and common terminology to help health systems efficiently and effectively test and adopt digital health solutions. In addition, a framework analogous to clinical trials.gov is needed to track and learn from the evaluation of digital health solutions that do not require FDA oversight.

The authors would like to thank Hannah Johnson, Kellie Breuning, Stephanie Muci, Rishab Shah, Connor Mauriello and Kelsey Krach for research assistance, and Nitin Vaswani, MD, MBA, the NODE.Health Program Director for helpful comments. NODE.Health is a consortium of healthcare systems and academic institutions that aims to build and promote evidence for digital medicine and care transformation.

Brian Van Winkle is Executive Director of NODE.Health. Amy R. Sheon and Yauheni Solad are volunteer members of the NODE.Health Board of Directors. Ashish Atreja is the founder of NODE.Health.

Amy R. Sheon’s work was supported by the Clinical and Translational Science Collaborative of Cleveland (CTSC) under grant number UL1TR000439 from the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health and NIH roadmap for Medical Research. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. NODE.Health receives industry funding to evaluate the clinical effectiveness of digital solutions. NODE.Health is also developing a registry to track the reporting of digital health solutions to encourage greater transparency and better knowledge.

References

1. Meskó B, Drobni Z, Bényei É, Gergely B, Győrffy Z. Digital health is a cultural transformation of traditional healthcare. mHealth [Internet]. 2017 [cited 2018 Aug 2];3. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5682364/
2. Topol E. The Creative Destruction of Medicine: How the Digital Revolution Will Create Better Health Care. Revised and expanded ed. New York: Basic Books; 2013.
3. EHRIntelligence. How Google, Microsoft, Apple Are Impacting EHR Use in Healthcare [Internet]. EHRIntelligence. 2018 [cited 2018 Jun 19]. Available from: https://ehrintelligence.com/news/how-google-microsoft-apple-are-impacting-ehr-use-in-healthcare
4. Special report: AI voice assistants have officially arrived in healthcare [Internet]. Healthcare IT News. 2018 [cited 2018 Feb 9]. Available from: http://www.healthcareitnews.com/news/special-report-ai-voice-assistants-have-officially-arrived-healthcare
5. The Lancet. Is digital medicine different? Lancet. 2018;392(10142):95.
6. Pawson R. Evidence-based Policy: A Realist Perspective. London: Sage Publications; 2006.
7. Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005 Jul;10(Suppl 1):21–34.
8. Borkan J. Immersion/Crystallization. In: Doing Qualitative Research. 2nd ed. Sage; 1999.
9. Otte-Trojel T, de Bont A, Rundall TG, van de Klundert J. How outcomes are achieved through patient portals: a realist review. J Am Med Inform Assoc [Internet]. 2014 [cited 2018 Sep 7];21(4):751–7. Available from: https://academic.oup.com/jamia/article/21/4/751/764620
10. Parasuraman A, Colby CL. An Updated and Streamlined Technology Readiness Index: TRI 2.0. J Serv Res [Internet]. 2015 [cited 2018 May 28];18(1):59–74. Available from: https://doi.org/
11. Najaftorkaman M, Ghapanchi AH, Talaei-Khoei A, Ray P. A Taxonomy of Antecedents to User Adoption of Health Information Systems: A Synthesis of Thirty Years of Research. J Assoc Inf Sci Technol [Internet]. 2015 [cited 2016 Aug 1];66(3):576–98. Available from: http://onlinelibrary.wiley.com/doi/10.1002/asi.23181/abstract
12. Kayser L, Kushniruk A, Osborne RH, Norgaard O, Turner P. Enhancing the Effectiveness of Consumer-Focused Health Information Technology Systems Through eHealth Literacy: A Framework for Understanding Users’ Needs. JMIR Hum Factors [Internet]. 2015 [cited 2018 Aug 30];2(1):e9. Available from: http://humanfactors.jmir.org/2015/1/e9/
13. Perry B, Herrington W, Goldsack JC, Grandinetti CA, Vasisht KP, Landray MJ, et al. Use of Mobile Devices to Measure Outcomes in Clinical Research, 2010–2016: A Systematic Literature Review. Digit Biomark [Internet]. 2018 [cited 2018 May 25];2(1):11–30. Available from: https://www.karger.com/Article/FullText/486347
14. Van Dyk L. A Review of Telehealth Service Implementation Frameworks. Int J Environ Res Public Health [Internet]. 2014 [cited 2018 May 9];11(2):1279–98. Available from: http://www.mdpi.com/1660-4601/11/2/1279
15. Granja C, Janssen W, Johansen MA. Factors Determining the Success and Failure of eHealth Interventions: Systematic Review of the Literature. J Med Internet Res [Internet]. 2018 [cited 2018 May 2];20(5):e10235. Available from: http://www.jmir.org/2018/5/e10235/
16. Mummah SA, Robinson TN, King AC, Gardner CD, Sutton S. IDEAS (Integrate, Design, Assess, and Share): A Framework and Toolkit of Strategies for the Development of More Effective Digital Interventions to Change Health Behavior. J Med Internet Res [Internet]. 2016 [cited 2018 May 28];18(12):e317. Available from: http://www.jmir.org/2016/12/e317/
17. Heffernan KJ, Chang S, Maclean ST, Callegari ET, Garland SM, Reavley NJ, et al. Guidelines and Recommendations for Developing Interactive eHealth Apps for Complex Messaging in Health Promotion. JMIR Mhealth Uhealth [Internet]. 2016 Feb;4(1):e14. Available from: http://mhealth.jmir.org/2016/1/e14/
18. Bradway M, Carrion C, Vallespin B, Saadatfard O, Puigdomènech E, Espallargues M, et al. mHealth Assessment: Conceptualization of a Global Framework. JMIR Mhealth Uhealth [Internet]. 2017 [cited 2018 May 28];5(5):e60. Available from: https://mhealth.jmir.org/2017/5/e60/
19. Mason VC. Innovation for Medically Vulnerable Populations, Part 3: Experiment Design [Internet]. Tincture. 2016 [cited 2018 Feb 27]. Available from: https://tincture.io/innovation-for-medically-vulnerable-populations-part-3-experiment-design-47c8c7b00919#.r6wt2gwlb
20. Malikova MA. Optimization of protocol design: a path to efficient, lower cost clinical trial execution. Future Sci OA [Internet]. 2016 [cited 2018 Aug 31];2(1). Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5137936/
21. Baker TB, Gustafson DH, Shah D. How Can Research Keep Up With eHealth? Ten Strategies for Increasing the Timeliness and Usefulness of eHealth Research. J Med Internet Res [Internet]. 2014 [cited 2018 May 28];16(2):e36. Available from: http://www.jmir.org/2014/2/e36/
22. Porzsolt F, Rocha NG, Toledo-Arruda AC, Thomaz TG, Moraes C, Bessa-Guerra TR, et al. Efficacy and effectiveness trials have different goals, use different tools, and generate different messages. Pragmatic Obs Res [Internet]. 2015 [cited 2018 Aug 13];6:47–54. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5045025/
23. Philpott D, Guergachi A, Keshavjee K. Design and Validation of a Platform to Evaluate mHealth Apps. Stud Health Technol Inform. 2017;235:3–7.
24. van Gemert-Pijnen JE, Nijland N, van Limburg M, Ossebaard HC, Kelders SM, Eysenbach G, et al. A Holistic Framework to Improve the Uptake and Impact of eHealth Technologies. J Med Internet Res [Internet]. 2011 [cited 2018 May 9];13(4):e111. Available from: http://www.jmir.org/2011/4/e111/
25. Monitoring and Evaluating Digital Health Interventions: A practical guide to conducting research and assessment [Internet]. [cited 2018 Feb 5]. Available from: https://docs.google.com/forms/d/e/1FAIpQLSerob1hgxR0w4W5PeyjAdjQUz53QYatp_bbFr5dIb9vCHHB-w/viewform?c=0&w=1&usp=embed_facebook
26. Fanning J, Mullen SP, McAuley E. Increasing physical activity with mobile devices: a meta-analysis. J Med Internet Res [Internet]. 2012 Nov;14(6):e161. Available from: http://www.jmir.org/2012/6/e161/
27. Agarwal S, LeFevre AE, Lee J, L’Engle K, Mehl G, Sinha C, et al. Guidelines for reporting of health interventions using mobile phones: mobile health (mHealth) evidence reporting and assessment (mERA) checklist. BMJ [Internet]. 2016 [cited 2017 Dec 15];352:i1174. Available from: http://www.bmj.com/content/352/bmj.i1174
28. Eysenbach G; CONSORT-EHEALTH Group. CONSORT-EHEALTH: Improving and Standardizing Evaluation Reports of Web-based and Mobile Health Interventions. J Med Internet Res [Internet]. 2011 [cited 2018 May 28];13(4):e126. Available from: https://www.jmir.org/2011/4/e126/
29. Welch VA, Norheim OF, Jull J, Cookson R, Sommerfelt H, Tugwell P, et al. CONSORT-Equity 2017 extension and elaboration for better reporting of health equity in randomised trials. BMJ [Internet]. 2017 [cited 2018 May 28];359:j5085. Available from: https://www.bmj.com/content/359/bmj.j5085
30. Eldridge SM, Chan CL, Campbell MJ, Bond CM, Hopewell S, Thabane L, et al. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. BMJ [Internet]. 2016 [cited 2018 Sep 4];355. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5076380/
31. Hors-Fraile S, Rivera-Romero O, Schneider F, Fernandez-Luque L, Luna-Perejon F, Civit-Balcells A, et al. Analyzing recommender systems for health promotion using a multidisciplinary taxonomy: A scoping review. Int J Med Inf [Internet]. 2018 [cited 2018 Jun 25];114:143–55. Available from: http://www.sciencedirect.com/science/article/pii/S1386505617304690
32. Basher KM, Nieto-Hipolito JI, Leon MD, Vazquez-Briseno M, López JDS, Mariscal RB. Major Existing Classification Matrices and Future Directions for Internet of Things. Adv Internet Things [Internet]. 2017 [cited 2018 Mar 26];7(4):112. Available from: http://www.scirp.org/journal/PaperInformation.aspx?PaperID=79369#abstract
33. WHO. Classification of digital health interventions v1.0 [Internet]. WHO [cited 2018 Apr 5]. Available from: http://www.who.int/reproductivehealth/publications/mhealth/classification-digital-health-interventions/en/
34. Donnelly JE, Jacobsen DJ, Whatley JE, Hill JO, Swift LL, Cherrington A, et al. Nutrition and physical activity program to attenuate obesity and promote physical and metabolic fitness in elementary school children. Obes Res. 1996 May;4(3):229–43.
35. Wang A, An N, Lu X, Chen H, Li C, Levkoff S. A classification scheme for analyzing mobile apps used to prevent and manage disease in late life. JMIR Mhealth Uhealth [Internet]. 2014 Feb;2(1):e6. Available from: http://mhealth.jmir.org/2014/1/e6/
36. Singh K, Bates D. Developing a Framework for Evaluating the Patient Engagement, Quality, and Safety of Mobile Health Applications [Internet]. 2016 [cited 2018 Mar 27]. Available from: http://www.commonwealthfund.org/publications/issue-briefs/2016/feb/evaluating-mobile-health-apps
37. Barello S, Triberti S, Graffigna G, Libreri C, Serino S, Hibbard J, et al. eHealth for Patient Engagement: A Systematic Review. Front Psychol [Internet]. 2016 [cited 2018 Apr 20];6. Available from: http://journal.frontiersin.org/Article/10.3389/fpsyg.2015.02013/abstract
38. Abelson J, Wagner F, DeJean D, Boesveld S, Gauvin FP, Bean S, et al. Public and patient involvement in health technology assessment: a framework for action. Int J Technol Assess Health Care [Internet]. 2016 [cited 2018 Apr 19];32(4):256–64. Available from: https://www.cambridge.org/core/journals/international-journal-of-technology-assessment-in-health-care/article/public-and-patient-involvement-in-health-technology-assessment-a-framework-for-action/A6CCA37AECD7DBF1189ECC028AE80903
39. Or CK, Karsh BT. A Systematic Review of Patient Acceptance of Consumer Health Information Technology. J Am Med Inform Assoc [Internet]. 2009 [cited 2018 Feb 28];16(4):550–60. Available from: https://academic.oup.com/jamia/article/16/4/550/765446
40. Kruse CS, DeShazo J, Kim F, Fulton L. Factors Associated With Adoption of Health Information Technology: A Conceptual Model Based on a Systematic Review. JMIR Med Inform [Internet]. 2014 [cited 2018 Feb 5];2(1):e9. Available from: http://medinform.jmir.org/2014/1/e9/
41. Karnoe A, Furstrand D, Christensen KB, Norgaard O, Kayser L. Assessing Competencies Needed to Engage With Digital Health Services: Development of the eHealth Literacy Assessment Toolkit. J Med Internet Res [Internet]. 2018 [cited 2018 May 10];20(5):e178. Available from: http://www.jmir.org/2018/5/e178/
42. Sawesi S, Rashrash M, Phalakornkule K, Carpenter SJ, Jones FJ. The Impact of Information Technology on Patient Engagement and Health Behavior Change: A Systematic Review of the Literature. JMIR Med Inform [Internet]. 2016;4(1):e1. Available from: http://medinform.jmir.org/2016/1/e1/
43. Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A’Court C, et al. Beyond Adoption: A New Framework for Theorizing and Evaluating Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability of Health and Care Technologies. J Med Internet Res [Internet]. 2017 [cited 2018 Apr 17];19(11):e367. Available from: http://www.jmir.org/2017/11/e367/
44. Kushniruk A, Turner P. A framework for user involvement and context in the design and development of safe e-Health systems. Stud Health Technol Inform. 2012;180:353–7.
45. Trivedi MC, Khanum MA. Role of context in usability evaluations: A review. Adv Comput Int J [Internet]. 2012 [cited 2018 Aug 26];3(2):69–78. Available from: http://arxiv.org/abs/1204.2138
46. Baum S. How can digital health startups steer clear of pilot study pitfalls? [Internet]. MedCity News. 2018 [cited 2018 Aug 17]. Available from: https://medcitynews.com/2018/03/how-can-digital-health-startups-steer-clear-of-pilot-study-pitfalls/?rf=1
47. Makhni S, Atreja A, Sheon A, Van Winkle B, Sharp J, Carpenter N. The Broken Health Information Technology Innovation Pipeline: A Perspective from the NODE Health Consortium. Digit Biomark [Internet]. 2017;1(1):64–72. Available from: http://www.karger.com/DOI/10.1159/000479017
48. Mobile Medical Applications: Guidance for Industry and Food and Drug Administration Staff [Internet]. U.S. Food and Drug Administration; 2015 [cited 2016 Mar 29]. Available from: http://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/UCM263366.pdf
49. Sartor F, Papini G, Cox LG, Cleland J. Methodological Shortcomings of Wrist-Worn Heart Rate Monitors Validations. J Med Internet Res [Internet]. 2018 [cited 2018 Jul 2];20(7):e10108. Available from: http://www.jmir.org/2018/7/e10108/
50. Larson RS. A Path to Better-Quality mHealth Apps. JMIR Mhealth Uhealth [Internet]. 2018 [cited 2018 Aug 23];6(7):e10414. Available from: http://mhealth.jmir.org/2018/7/e10414/
51. Shuren J, Patel B, Gottlieb S. FDA Regulation of Mobile Medical Apps. JAMA [Internet]. 2018 [cited 2018 Aug 13];320(4):337–8. Available from: https://jamanetwork.com/journals/jama/fullarticle/2687221
52. Clinical and patient decision support software: Draft guidance for industry and Food and Drug Administration Staff [Internet]. U.S. Food and Drug Administration; 2017 [cited 2018 Aug 23]. Available from: https://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/UCM587819.pdf
53. Kumar S, Nilsen WJ, Abernethy A, Atienza A, Patrick K, Pavel M, et al. Mobile health technology evaluation: the mHealth evidence workshop. Am J Prev Med. 2013 Aug;45(2):228–36.
54. Kramer AD, Guillory JE, Hancock JT. Experimental evidence of massive-scale emotional contagion through social networks. Proc Natl Acad Sci [Internet]. 2014 [cited 2018 Sep 7];111(24):8788–90. Available from: http://www.pnas.org/content/111/24/8788
55. Cobb NK, Graham AL. Health Behavior Interventions in the Age of Facebook. Am J Prev Med [Internet]. 2012 [cited 2018 Sep 7];43(5):571–2. Available from: http://www.sciencedirect.com/science/article/pii/S0749379712005326
56. Sheon AR, Bolen SD, Callahan B, Shick S, Perzynski AT. Addressing Disparities in Diabetes Management Through Novel Approaches to Encourage Technology Adoption and Use. JMIR Diabetes [Internet]. 2017 [cited 2017 Jul 20];2(2):e16. Available from: http://diabetes.jmir.org/2017/2/e16/
57. Van Winkle B, Carpenter N, Moscucci M. Why Aren’t Our Digital Solutions Working for Everyone? AMA J Ethics [Internet]. 2017 [cited 2017 Nov 6];19(11):1116. Available from: http://journalofethics.ama-assn.org/2017/11/stas2-1711.html
58. Atreja A, Khan S, Otobo E, Rogers J, Ullman T, Grinspan A, et al. Impact of Real World Home-Based Remote Monitoring on Quality of Care and Quality of Life in IBD Patients: Interim Results of Pragmatic Randomized Trial. Gastroenterology [Internet]. 2017 [cited 2017 Sep 10];152(5):S600–S601. Available from: http://www.gastrojournal.org/article/S0016-5085(17)32145-5/abstract
59. Atreja A, Daga N, Patel NP, Otobo E, Rogers J, Patel K, et al. Sa1426 Validating an Automated System to Identify Patients With High Risk Lesions Requiring Surveillance Colonoscopies: Implications for Accountable Care and Colonoscopy Bundle Payment. Gastrointest Endosc [Internet]. 2015 [cited 2017 Sep 10];81(5):AB210. Available from: http://www.giejournal.org/article/S0016-5107(15)00419-8/abstract
60. Evans B, Shiao S. Streamlining Enterprise Sales in Digital Health [Internet]. Rock Health; 2018 [cited 2018 Aug 14]. Available from: https://rockhealth.com/reports/streamlining-enterprise-sales-in-digital-health/
61. Validic. Insights on Digital Health Technology Survey 2016: How digital health devices and data impact clinical trials [Internet]. Validic; 2016 [cited 2018 Aug 19]. Available from: https://hitconsultant.net/2016/09/19/validic-digital-health-devices-report/
62. Surve S. How to survive IRBs and have successful digital health pilots [Internet]. MedCity News. 2017 [cited 2018 Aug 19]. Available from: https://medcitynews.com/2017/11/survive-irbs-successful-digital-health-pilots/?rf=1
63. Umscheid CA, Margolis DJ, Grossman CE. Key Concepts of Clinical Trials: A Narrative Review. Postgrad Med [Internet]. 2011 [cited 2018 Aug 11];123(5):194–204. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3272827/
64. Dorst K. The core of ‘design thinking’ and its application. Des Stud [Internet]. 2011 [cited 2017 Sep 7];32(6):521–32. Available from: http://www.sciencedirect.com/science/article/pii/S0142694X11000603
65. Corrigan-Curay J, Sacks L, Woodcock J. Real-World Evidence and Real-World Data for Evaluating Drug Safety and Effectiveness. JAMA [Internet]. 2018 [cited 2018 Aug 13]. Available from: https://jamanetwork.com/journals/jama/fullarticle/2697359
66. Ford I, Norrie J. Pragmatic Trials. N Engl J Med [Internet]. 2016 [cited 2018 Aug 24];375(5):454–63. Available from: https://doi.org/
67. Napolitano MA, Whiteley JA, Mavredes MN, Faro J, DiPietro L, Hayman LL, et al. Using social media to deliver weight loss programming to young adults: Design and rationale for the Healthy Body Healthy U (HBHU) trial. Contemp Clin Trials [Internet]. 2017 [cited 2018 Sep 7];60:1–13. Available from: http://www.sciencedirect.com/science/article/pii/S1551714416305146
68. Chen T, Zhang X, Jiang H, Asaeikheybari G, Goel N, Hooper MW, et al. Are you smoking? Automatic alert system helping people keep away from cigarettes. Smart Health [Internet]. 2018 [cited 2018 Aug 6]. Available from: http://www.sciencedirect.com/science/article/pii/S2352648318300436
69. Tung JY, Shaw RJ, Hagenkord JM, Hackmann M, Muller M, Beachy SH, et al. Accelerating Precision Health by Applying the Lessons Learned from Direct-to-Consumer Genomics to Digital Health Technologies. NAM Perspect [Internet]. 2018 [cited 2018 Sep 7]. Available from: https://nam.edu/accelerating-precision-health-by-applying-the-lessons-learned-from-direct-to-consumer-genomics-to-digital-health-technologies/
70. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health [Internet]. 1999 [cited 2009 Aug 22];89(9):1322–7. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1508772&rendertype=abstract
71. Carlfjord S, Andersson A, Bendtsen P, Nilsen P, Lindberg M. Applying the RE-AIM framework to evaluate two implementation strategies used to introduce a tool for lifestyle intervention in Swedish primary health care. Health Promot Int [Internet]. 2012 [cited 2018 Aug 10];27(2):167–76. Available from: https://academic.oup.com/heapro/article/27/2/167/682242
72. Glasgow RE, Estabrooks PE. Pragmatic Applications of RE-AIM for Health Care Initiatives in Community and Clinical Settings. Prev Chronic Dis [Internet]. 2018 [cited 2018 Aug 10];15. Available from: https://www.cdc.gov/pcd/issues/2018/17_0271.htm
73. McCloud RF, Okechukwu CA, Sorensen G, Viswanath K. Entertainment or Health? Exploring the Internet Usage Patterns of the Urban Poor: A Secondary Analysis of a Randomized Controlled Trial. J Med Internet Res [Internet]. 2016 [cited 2018 Jun 11];18(3). Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4796406/
74. Schartman-Cycyk S, Meissier K. Bridging the gap: what affordable, uncapped internet means for digital inclusion [Internet]. Mobile Beacon; 2017 [cited 2017 May 20]. Available from: http://www.mobilebeacon.org/wp-content/uploads/2017/05/MB_ResearchPaper_FINAL_WEB.pdf
75. Anderson M, Perrin A, Jiang J. 11% of Americans don’t use the internet. Who are they? [Internet]. Pew Research Center. 2018 [cited 2018 Apr 13]. Available from: http://www.pewresearch.org/fact-tank/2018/03/05/some-americans-dont-use-the-internet-who-are-they/
76. Internet/Broadband Fact Sheet [Internet]. Pew Research Center; 2017 [cited 2017 Nov 7]. Available from: http://www.pewinternet.org/fact-sheet/internet-broadband/
77. Anderson M, Perrin A. Technology adoption climbs among older adults [Internet]. 2017 [cited 2017 May 20]. Available from: http://www.pewinternet.org/2017/05/17/technology-use-among-seniors/
78. Anderson M. Digital divide persists even as lower-income Americans make gains in tech adoption [Internet]. Washington (DC): Pew Research Center; 2017. Available from: http://www.pewresearch.org/fact-tank/2017/03/22/digital-divide-persists-even-as-lower-income-americans-make-gains-in-tech-adoption/#
79. Horrigan JB. Digital Readiness Gaps [Internet]. Pew Research Center; 2016 [cited 2017 Feb 14]. Available from: http://www.pewinternet.org/2016/09/20/digital-readiness-gaps/
80. Anderson M, Horrigan JB. Smartphones may not bridge digital divide for all [Internet]. Pew Research Center Fact Tank. 2016 [cited 2016 Nov 8]. Available from: http://www.pewresearch.org/fact-tank/2016/10/03/smartphones-help-those-without-broadband-get-online-but-dont-necessarily-bridge-the-digital-divide/
81. Perzynski AT, Roach MJ, Shick S, Callahan B, Gunzler D, Cebul R, et al. Patient portals and broadband internet inequality. J Am Med Inform Assoc [Internet]. 2017 [cited 2017 Apr 14]. Available from: https://academic.oup.com/jamia/article/doi/10.1093/jamia/ocx020/3079333/Patient-portals-and-broadband-internet-inequality
82. Ackerman SL, Sarkar U, Tieu L, Handley MA, Schillinger D, Hahn K, et al. Meaningful use in the safety net: a rapid ethnography of patient portal implementation at five community health centers in California. J Am Med Inform Assoc [Internet]. 2017 [cited 2017 Mar 22]. Available from: https://academic.oup.com/jamia/article-abstract/doi/10.1093/jamia/ocx015/3072321/Meaningful-use-in-the-safety-net-a-rapid?redirectedFrom=fulltext
83. Tieu L, Schillinger D, Sarkar U, Hoskote M, Hahn KJ, Ratanawongsa N, et al. Online patient websites for electronic health record access among vulnerable populations: portals to nowhere? J Am Med Inform Assoc [Internet]. 2017 Apr;24(e1):e47–54. Available from: https://academic.oup.com/jamia/article-abstract/24/e1/e47/2631487/Online-patient-websites-for-electronic-health?redirectedFrom=fulltext
84. Lyles C, Schillinger D, Sarkar U. Connecting the Dots: Health Information Technology Expansion and Health Disparities. PLoS Med [Internet]. 2015 [cited 2017 Jan 17];12(7):e1001852. Available from: http://dx.plos.org/10.1371/journal.pmed.1001852
85. Stellefson M, Chaney B, Barry AE, Chavarria E, Tennant B, Walsh-Childers K, et al. Web 2.0 Chronic Disease Self-Management for Older Adults: A Systematic Review. J Med Internet Res [Internet]. 2013 [cited 2017 Aug 4];15(2):e35. Available from: http://heb.hhp.ufl.edu/wp-content%5Cuploads/web2.0_stellefson.pdf
86. Thies K, Anderson D, Cramer B. Lack of Adoption of a Mobile App to Support Patient Self-Management of Diabetes and Hypertension in a Federally Qualified Health Center: Interview Analysis of Staff and Patients in a Failed Randomized Trial. JMIR Hum Factors [Internet]. 2017;4(4):e24. Available from: http://www.ncbi.nlm.nih.gov/pubmed/28974481
87. Chaiyachati KH, Hubbard RA, Yeager A, Mugo B, Lopez S, Asch E, et al. Association of Rideshare-Based Transportation Services and Missed Primary Care Appointments: A Clinical Trial. JAMA Intern Med [Internet]. 2018 [cited 2018 Mar 15];178(3):383–9. Available from: https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2671405
88. Moore JO. Technology-supported apprenticeship in the management of chronic disease [Internet]. 2013 [cited 2018 Aug 25]. Available from: http://dspace.mit.edu/handle/1721.1/91853

Additional information

A.R.S. is the lead author and B.W., Y.S., and A.A. are co-authors.