The use of digital phenotyping continues to expand across all fields of health. By collecting quantitative data in real time using devices such as smartphones or smartwatches, researchers and clinicians can develop profiles of a wide range of conditions. Smartphones contain sensors that collect data, such as GPS or accelerometer data, which can inform secondary metrics such as time spent at home, location entropy, or even sleep duration. These metrics, when used as digital biomarkers, can not only be used to investigate the relationship between behavior and health symptoms but can also support personalized and preventative care. Successful phenotyping requires consistent, long-term collection of relevant, high-quality data. In this paper, we present the potential of SensorKit sensors, newly available on iOS devices on an opt-in basis for approved research, to improve the accuracy of digital phenotyping. We collected opt-in sensor data over 1 week from a single person with depression using the open-source mindLAMP app developed by the Division of Digital Psychiatry at Beth Israel Deaconess Medical Center. Five sensors from SensorKit were included. The names of the sensors, as listed in official documentation, are as follows: phone usage, messages usage, visits, device usage, and ambient light. We compared data from these five new SensorKit sensors to data from our current digital phenotyping sensors to assess similarities and differences in both raw and processed data. We present sample data from all five of these new sensors. We also present sample data from current digital phenotyping sources and compare these data to SensorKit sensors when applicable. SensorKit offers great potential for health research. Many SensorKit sensors improve upon previously accessible features and produce data that appear clinically relevant. SensorKit sensors will likely play a substantial role in digital phenotyping.
However, using these data requires advanced health app infrastructure and the ability to securely store high-frequency data.

The emergence of digital phenotyping, defined as “the moment-by-moment quantification of the individual-level human phenotype in situ using data from personal digital devices” [1], offers a paradigm shift across fields that study the dynamics of human physiology, emotions, behavior, and health symptoms [2]. Personal devices like smartphones and smartwatches can capture and record temporally dense, multimodal, longitudinal, and quasi-real-time data. From these data, secondary metrics measuring behavioral patterns and health conditions like sleep quality and socialness can be computed, and human behavior can be categorized into observable patterns. These patterns serve as digital biomarkers for conditions ranging from schizophrenia to spinal cord disorders [3, 4].

Digital phenotyping has great potential. For example, digital phenotyping can provide the data required for just-in-time adaptive interventions triggered by vulnerable states and delivered at opportune moments [5, 6]. It already has applications in health prevention, screening, diagnosis, and monitoring [7‒10]. Not only can quantitative patterns in data uncover relationships between conditions, but they can also be used to predict symptom onset and severity.

Effective, replicable, and reliable digital phenotyping requires access to high-quality and relevant raw sensor data. Mobile devices contain sensors that passively record information surrounding user activity, including but not limited to geolocation, motion, and exercise. Native device software sometimes processes data into secondary metrics automatically (e.g., step count), but most data can be collected in raw form (e.g., accelerometer, GPS). Despite the interpretability of preprocessed metrics, there may be advantages to collecting raw data directly. Raw data permit novel analyses and the creation of new biomarkers with high clinical relevance. For example, by combining raw accelerometer and screen state (off/on) data from a smartphone, it is feasible to use algorithms that estimate sleep duration [11].
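As a concrete illustration of this kind of derived biomarker, the sketch below estimates sleep as the longest run of hours with little movement and no screen use. It is a simplified heuristic under assumed inputs (hourly mean accelerometer deviation and the set of hours with screen use), not the validated algorithm of the cited study:

```python
def estimate_sleep_hours(accel_by_hour, screen_on_hours, threshold=0.05):
    """Estimate sleep duration (hours) as the longest run of hours with
    low accelerometer movement and no screen-on events.

    accel_by_hour: dict mapping hour index -> mean absolute accelerometer
    deviation (g). screen_on_hours: set of hour indices with screen use.
    Illustrative heuristic only; thresholds are assumptions.
    """
    longest = current = 0
    for hour in sorted(accel_by_hour):
        still = accel_by_hour[hour] < threshold   # little movement
        dark = hour not in screen_on_hours        # no screen activity
        current = current + 1 if (still and dark) else 0
        longest = max(longest, current)
    return longest
```

In practice, such an estimate would be computed per night and validated against actigraphy or self-report before clinical use.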

Clinicians at the Division of Digital Psychiatry at BIDMC already integrate digital phenotyping to augment clinical care [12]. In addition to standard clinical assessments, patients use the open-source mindLAMP app to complete real-time surveys about their mental health, engage in therapy exercises between clinical visits, and passively capture digital phenotyping signals related to underlying health behaviors [13]. Pairing ecological momentary assessments with passive data collection in mindLAMP can capture information in a patient’s natural environment, providing more avenues for understanding dynamic trends in illness/recovery in addition to standard clinical assessments. This allows for more personalized care and better shared clinical decision-making. Patients and clinicians both receive weekly phenotyping summaries including correlations between mental health survey scores and processed behavioral metrics from digital phenotyping data (e.g., mood vs. time spent at home, anxiety vs. sleep duration, stress vs. steps). These digital biomarkers can build emotional self-awareness and facilitate discussion with the clinical team [14]. In addition, although beyond the scope of this paper, patients can complete therapy-related skill-building and practice exercises on mindLAMP that best align with weekly treatment goals.

Previous studies surrounding digital phenotyping and mental health have demonstrated success. For example, mindLAMP research has observed associations between access to green space (as measured by GPS mapping) and mental health symptoms [15]. Anomaly detection models have demonstrated how aberrations in mindLAMP sensor data captured over months can provide early warning signs for relapse in people with serious mental illnesses [3]. Digital phenotyping data analysis displays versatility. The data used in these studies, as well as their analytical pipelines, can be applied to other disorders. Digital data can quantify psychosocial well-being trajectories after spinal cord injury or explore the role of cognition in Parkinson’s disease [16]. Although smartphone data analysis has shown potential for more personalized and preventative care, assessing its full potential requires exploration at scale.

In recent years, larger studies have offered a more nuanced picture of the field. This picture, while confirming the potential of digital phenotyping, also highlights some of its challenges. For example, correlations between digital phenotyping data and clinical outcomes may appear smaller than expected when investigated with larger sample sizes [17]. Other larger digital phenotyping studies have even reported that data quality concerns (related to the amount of sensor data captured vs. expected) preclude any reliable formal analysis [18]. To assess the full potential of digital phenotyping, the field requires replication of study results. However, many app research platforms are unstable. For example, the app Purple Robot was used to conduct many early and impressive digital phenotyping studies but today is no longer available or supported [19]. For those apps still in existence, a lack of standards around digital phenotyping data prevents easy replication or reliable reporting. Going forward, the digital phenotyping approach to research and clinical care requires robust, continuously maintained, and research-friendly digital phenotyping infrastructure with open standards for replication.

Toward that end, we explore an advancement in improving the reliability of digital phenotyping: the advent of SensorKit by Apple, now supported on iPhones. SensorKit allows carefully selected and approved smartphone applications to access new digital phenotyping metrics related to step count, accelerometer, rotation rate, ambient light in the physical environment, and commute or travel habits. Even with SensorKit available in an app, users must opt into sharing each sensor. Given the sensitive nature of all digital phenotyping data, Apple vets all requests for SensorKit and permits its use only in Ethics Board-/IRB-approved research.

In this paper, we introduce and describe the new sources of data made available by SensorKit. To show our ability to collect these data, and to illustrate their structure so that the field can better understand their nature, potential uses, and benefits/limitations, we present data from a single patient enrolled in a research study (IRB #202000956, BIDMC). This patient used mindLAMP with SensorKit installed to passively collect sensor data, with the goal of assessing how this new digital phenotyping data compares to existing approaches. Because of the novelty of SensorKit data, we do not draw any clinical conclusions in this paper but instead aim to introduce SensorKit to the field while identifying targets for future research surrounding SensorKit and digital phenotyping as a whole.

Sample SensorKit data in this study were captured over 1 week from a single person with depression using the open-source mindLAMP app. SensorKit was approved for use in this study by both Apple and the BIDMC IRB (#202000956). The SensorKit API was integrated into mindLAMP, and data were retrieved using the mindLAMP API through Python 3.8. Data produced by these sensors differ in format and temporal resolution. Therefore, although collected over the same period, not all samples in this paper are reported over the same range of time. Apps using SensorKit are required to obtain user consent as a first step, as stated in the developer documentation (https://developer.apple.com/documentation/sensorkit) below:

“When your app attempts to read sensor information for the first time on a user’s device, the system presents a sheet that explains your app’s study and the information your app collects. The sheet, at the time of this writing in 2023, prompts the user to approve access to personal information at a granular level, and your app can let the user know which information is essential to the study. You supply the study purpose, requested sensors, and privacy policy URL in the project’s Info.plist.”

When sensors are assigned, participants are notified by iOS and are prompted to accept or deny the collection of data. They can later modify their choices in iOS settings. Data from five sensors were collected in this study. The names of the sensors, as listed in official documentation, are as follows: phone usage, messages usage, visits, device usage, and ambient light. It should be noted that SensorKit offers more sensors than the ones we report in this paper. Because Apple grants separate permission for each sensor, we report only on the sensors for which we obtained permission in this study. A full list of SensorKit sensors can be viewed in Apple’s official documentation.

This study also captured digital phenotyping data independently of SensorKit using the existing sensor data capture and processing infrastructure of mindLAMP. To these data, we can apply our data analysis pipeline, LAMP-cortex, to convert raw data into secondary information. This pipeline, a Python 3 package available on GitHub, consists of a hierarchy of data processing tools called “features” (Fig. 1). Data can be queried directly from the LAMP API in real time using cortex. Cortex features are Python functions that process this raw data into more interpretable metrics and are categorized into three groups: secondary, primary, and raw. Terminal and fully processed cortex features are named “secondary” features. Features which are computed from raw data and then used in the computation of other features are called “primary” features. “Raw” features simply query raw data from the mindLAMP API but convert the raw data into simple, convenient, and vectorized outputs. Some of the sensors offered by SensorKit are comparable to sensors or features already available through cortex. Therefore, in this paper, we compare raw data collected using SensorKit sensors and queried via the LAMP API to analogous raw data queried and processed using cortex. Eventually, we also plan to incorporate SensorKit sensors into cortex as raw features, from which we will produce new primary and secondary features.
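The raw/primary/secondary hierarchy described above can be sketched as follows. The function names and clustering logic are hypothetical illustrations of the pattern, not actual LAMP-cortex features:

```python
# Illustrative sketch of a raw -> primary -> secondary feature chain.
# Function names and logic are hypothetical, not LAMP-cortex code.

def raw_gps(events):
    """'Raw'-style feature: vectorize API output into (timestamp, lat, lon)."""
    return [(e["timestamp"], e["data"]["latitude"], e["data"]["longitude"])
            for e in events]

def significant_locations(points, precision=2):
    """'Primary'-style feature: cluster raw points by rounded coordinates."""
    clusters = {}
    for ts, lat, lon in points:
        key = (round(lat, precision), round(lon, precision))
        clusters.setdefault(key, []).append(ts)
    return clusters

def home_time_fraction(clusters):
    """'Secondary'-style feature: fraction of samples at the most-visited cluster."""
    if not clusters:
        return 0.0
    total = sum(len(v) for v in clusters.values())
    return max(len(v) for v in clusters.values()) / total
```

The key design point is that each layer consumes the output of the layer below, so new secondary features can reuse existing primary features without touching raw queries.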

Fig. 1.

Cortex feature hierarchy.


Below, we summarize data formats and present sample data collected from SensorKit sensors. Although the mindLAMP API reports raw data as obtained by iOS sensors, the schema of the reported data is specific to the mindLAMP API. In some cases, the data themselves may differ between platforms in addition to the schema. For example, the mindLAMP API automatically converts timestamps to epoch time (milliseconds since 1970). The sampling rate of each sensor defaults to 5 Hz, but software settings can decrease collection frequency. For example, GPS will not be collected continuously if users only allow GPS data collection when the mindLAMP application is open and running (though during periods of collection, frequency will remain at 5 Hz or whatever frequency was defined by the researcher managing the mindLAMP account). The mindLAMP API also processes raw data to a minor extent by automatically rejecting duplicate data points (where the content and timestamp are identical between two or more data points). We report sample data below, as well as comparisons to existing cortex features, when applicable.
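The two automatic steps described above, epoch-millisecond timestamps and duplicate rejection, can be sketched as follows. The event dictionaries shown are a simplified assumption, not the exact mindLAMP API schema:

```python
from datetime import datetime, timezone

def dedupe(events):
    """Drop events whose (timestamp, content) exactly match an earlier
    event, mirroring the duplicate rejection described above (illustrative;
    assumes events are dicts with 'timestamp' and flat 'data' fields)."""
    seen, out = set(), []
    for e in events:
        key = (e["timestamp"], tuple(sorted(e["data"].items())))
        if key not in seen:
            seen.add(key)
            out.append(e)
    return out

def to_datetime(epoch_ms):
    """Convert an epoch-millisecond timestamp (milliseconds since 1970,
    the mindLAMP convention) to a timezone-aware UTC datetime."""
    return datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
```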

Phone Usage

This SensorKit sensor collects information regarding phone call events (Table 1).

Table 1.

SensorKit sensor collects information regarding phone call events

Date | Call duration, s | Outgoing calls | Incoming calls | Unique contacts
6 Oct 2022 | 1,851
7 Oct 2022 | 807
8 Oct 2022 | 224

The comparable sensor native to mindLAMP, called “telephony,” also reports instances of phone calls. For comparison, we show a bar chart of total call counts as reported by SensorKit and telephony over a sample 3-day period where data from both sensors were available (Fig. 2). Of note, neither SensorKit nor mindLAMP reports the phone numbers or content of any phone call.

Fig. 2.

Comparison of sensors.


Messages Usage

This sensor reports general texting activity by a participant (Table 2). Before this sensor, mindLAMP could not record text activity. SensorKit does not report any phone numbers or message content.

Table 2.

SensorKit reports general texting activity by a participant

Date | Incoming messages | Outgoing messages | Unique contacts
26 Sep 2022 | 95 | 28
27 Sep 2022 | 105 | 62
28 Sep 2022 | 88 | 61

Visits

This SensorKit sensor collects anonymized metrics derived from GPS data (Table 3).

Table 3.

SensorKit anonymized metrics derived from GPS data

Date | Location category | Location type | Time spent at location, s | Distance from home, m
27 Sep 2022 | Gym | 1800 | 1,320
27 Sep 2022 | Home | 36000
27 Sep 2022 | Work | 29700 | 3,324

Descriptions of data types include the following:

  • Location category: the type of location reported as an integer value. In the table, “location type” refers to the corresponding string name of the integer category.

  • Time spent at location: duration reported in seconds, computed from the difference between arrival and departure intervals.

  • Distance from home: distance from home location reported in meters.

The SensorKit visits feature has similarities to the native mindLAMP feature called “GPS.” “GPS” reports raw data in terms of longitude, latitude, and timestamps, from which a wide variety of secondary metrics can be computed. For example, this mindLAMP raw location data can be used to recreate the visits features in SensorKit as well as to derive novel ones, such as a participant’s pollution exposure across a defined period of time. The SensorKit visits feature presents up to five different location categories but, unlike mindLAMP, does not offer exact coordinates.
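For instance, a home-time metric of the kind derivable from raw coordinates can be sketched as below. This is a minimal illustration assuming a known home coordinate and a fixed GPS sampling interval, not the cortex implementation:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) coordinates."""
    r = 6371000.0  # mean Earth radius, m
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def home_time_s(points, home, radius_m=100.0, sample_interval_s=60.0):
    """Approximate seconds spent at home by counting GPS samples within
    radius_m of the home coordinate. points: list of (lat, lon); the
    radius and sampling interval are illustrative assumptions."""
    return sum(sample_interval_s for lat, lon in points
               if haversine_m(lat, lon, home[0], home[1]) <= radius_m)
```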

Device Usage

This SensorKit sensor broadly reports the timestamps and durations of device usage events by participants. Device usage data includes total unlocks, unlocked duration (screen time), and total screen wakes (Table 4). Device usage also breaks down phone usage into specific details when possible.

Table 4.

SensorKit reports the timestamps and durations of device usage events by participants

Date | Total unlocks | Unlocked duration, s | Total screen wakes
12 Oct 2022 | 152 | 17,774 | 182
13 Oct 2022 | 127 | 33,502 | 145
14 Oct 2022 | 42 | 3,971 | 51

Descriptions of data collected include the following:

  • Total unlocks: the number of times the user unlocked their phone using a passcode or Face ID (if applicable).

  • Unlocked duration: the total time the device remained in an unlocked state during the report (seconds).

  • Total screen wakes: the total number of events in which the screen awoke from a sleep state.

While in use, device usage also collects specific semantic details regarding mobile application activity, notifications received, and web browsing activity (i.e., time spent browsing the internet broken into categories of website type). Device usage categorizes these data into broad categories of application or website domain types. For instance, it is possible to collect and report time spent using sports or healthcare applications or websites, as well as the number of notifications received from applications of those same categories (Fig. 3). A full list of these properties and their categories can be found in Apple’s official documentation.

Fig. 3.

Application usage by category over 1 week.


This sensor can also be used to break down application usage by time of the day, allowing patients and clinicians to track behavior on a precise temporal scale. We present a clock plot depicting average minutes spent using social media by time of day as observed by SensorKit data over the course of 1 week (Fig. 4).
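Binning usage events by hour of day, as in the clock plot, can be sketched as follows. The input format (already filtered to one category, such as social media) is an illustrative assumption, not the SensorKit report schema:

```python
from collections import defaultdict

def minutes_by_hour(usage_events, n_days=7):
    """Average minutes of app use per hour of day across n_days.
    usage_events: list of (hour_of_day, duration_s) tuples, assumed to be
    pre-filtered to a single app category (illustrative)."""
    totals = defaultdict(float)
    for hour, duration_s in usage_events:
        totals[hour] += duration_s / 60.0  # seconds -> minutes
    # One entry per hour so the clock plot has all 24 spokes.
    return {h: totals[h] / n_days for h in range(24)}
```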

Fig. 4.

Average number of minutes spent using social media by hour of the day.


The new device usage data have clear similarities to the native mindLAMP “device state” sensor. The “device state” sensor reports device status as one of four possibilities (on and unlocked, on and locked, off and unlocked, off and locked). For comparison, we created a bar chart showing total unlocks as reported by device usage versus total unlocks as reported by “device state” across a 5-day sample where data from both sensors were available (Fig. 5). However, the native mindLAMP “device state” sensor cannot report on the types of websites visited or apps opened. Of note, neither SensorKit nor mindLAMP reports the exact website opened or any content viewed or entered in the website or app.

Fig. 5.

Comparison of sensors.


Ambient Light

This SensorKit sensor collects information from light captured by the device’s camera (Table 5). This sensor reports the chromaticity values of the observed light sources as (x, y) pairs. We binned ambient light data into periods of 15 min and summed the number of captured light events over each bin (Fig. 6). The near-cyclic periods of captured light and lack of captured light likely reflect sleep-wake periods. This functionality is not available in native mindLAMP without SensorKit.

Table 5.

SensorKit ambient light data captured by the device’s camera

Light sample | Lux | Chromaticity
10 | (0.34, 0.36)
11 | (0.33, 0.35)
Fig. 6.

Number of observed light events over time.


Descriptions of data collected include the following:

  • Lux: describes the luminous flux per unit area of the light source in units of lux (lumen per square meter, lm/m²).

  • Chromaticity: an (x, y) pair describing the hue (shade of color) and colorfulness (intensity of color) of a light source.
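The 15-min binning used to count light events (as in Fig. 6) can be sketched as follows, assuming epoch-millisecond timestamps per the mindLAMP convention:

```python
def bin_events(timestamps_ms, bin_min=15):
    """Count events per bin of bin_min minutes (illustrative).
    timestamps_ms: iterable of epoch-millisecond timestamps.
    Returns {bin_index: count}, where bin 0 starts at the epoch."""
    bin_ms = bin_min * 60 * 1000
    counts = {}
    for ts in timestamps_ms:
        b = ts // bin_ms  # integer bin index
        counts[b] = counts.get(b, 0) + 1
    return counts
```

Gaps in the resulting bin indices (bins with no key) correspond to periods with no captured light, such as the likely sleep periods noted above.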

In this paper, we report sample data collected from the following five sensors included in Apple’s SensorKit: phone usage, messages usage, visits, device usage, and ambient light. Our results suggest that SensorKit generates unique data not available with other digital phenotyping apps, and that integrating it into existing apps is feasible and can produce rich data with broad healthcare applications.

SensorKit data can be raw or processed. Ambient light chromaticity, for example, reports hue and colorfulness as raw (x, y) pairs. In contrast, the visits sensor reports GPS data processed into distinct and broad location categories. Although raw data collection generally shows more potential for analysis, as it allows greater flexibility in secondary metric computation, raw GPS raises privacy concerns and a data storage burden. A simple process such as converting data into features (e.g., home time) and averaging these features across days can minimize privacy risks and data storage requirements. The details of how Apple determines location categories remain unclear, and changing the algorithm without alerting the research community could threaten reproducible research. Even so, the location data presented by SensorKit may be appealing to many organizations or smaller teams that cannot maintain the infrastructure necessary to protect raw data or process it into features. Therefore, in this case, the preprocessed nature of visits data may be an exception to the general advantages of raw data collection.

In this paper, we also compared two of the new SensorKit sensors to previously available sensors. Specifically, we compared phone usage and device usage to “telephony” and “device state,” respectively. These sensors appear to mostly agree with each other, although there were small differences in the number of phone unlock events. Determining the level of agreement between these sensors with precision would require a thorough investigation with more data and more participants, or simulation research with a known number of phone lock events per day. Overall, device usage offers screen activity information to a level of detail that far surpasses the current capabilities of any digital phenotyping app. Access to the category of each website or app opened by a user adds a new level of contextual richness to screen time without revealing any exact website or screen content. This context has broad potential. For example, this new data stream could help determine the relationship between anxiety and social media activity or between screen use and mental health overall.

Two of the SensorKit sensors presented in this report are completely novel (ambient light and messages usage) and can be integrated into research or clinical practice. Text activity data can build upon prior studies on phone activity [5, 6]. Ambient light data can indicate circadian rhythms. Levels of ambient light would likely improve Bayesian models of sleep estimation. Estimated sleep onset and offset parameters could be informed by prior distributions of ambient light levels (i.e., a participant is more likely to be sleeping during times of low levels of ambient light), among other variables. Integrating more variables into such models will improve their precision and accuracy.
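As a toy illustration of how ambient light could inform such a model, the sketch below applies Bayes’ rule to update the probability of sleep from a single light reading. All probabilities and thresholds are illustrative assumptions, not fitted parameters from any study:

```python
def p_asleep_given_light(light_lux, prior_asleep=0.3,
                         p_dark_given_asleep=0.9, p_dark_given_awake=0.2,
                         dark_threshold=10.0):
    """Toy Bayesian update: P(asleep | light reading) under assumed
    likelihoods of darkness given sleep state (all values illustrative)."""
    dark = light_lux < dark_threshold
    like_asleep = p_dark_given_asleep if dark else 1 - p_dark_given_asleep
    like_awake = p_dark_given_awake if dark else 1 - p_dark_given_awake
    num = like_asleep * prior_asleep
    return num / (num + like_awake * (1 - prior_asleep))
```

A real model would combine such evidence across time with accelerometer and screen-state likelihoods rather than a single reading.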

The introduction of new sources of data enables research on relationships which previously could not be recorded. For example, as mentioned, SensorKit allows for the investigation of the relationship between texting activity and anxiety levels over time. Collecting new data streams could help patients and clinicians alike to determine which metrics are associated with worsening or improving symptoms by correlating behavior with survey scores. For example, clinicians can present data visualizations to patients to better understand trends in behavior and how these trends associate with symptom changes. Metrics derived from SensorKit sensors can also be incorporated into machine learning models. For example, mindLAMP has previously applied its digital phenotyping data to anomaly detection models to predict relapse risk of schizophrenia and, using a separate similarity matrix-based model, to determine the relationship between behavioral abnormalities and clinical scores [20]. Adding novel socialness features (messaging activity) and more accurate sleep features (using ambient light) would likely improve both the sensitivity and specificity of relapse prediction.

Chief among the limitations of SensorKit is that it must be used on a platform compatible with Apple’s APIs. Obtaining access to SensorKit sensors requires direct permission from Apple and local IRB approval. Because each sensor requires individual permission from patients, it may be difficult to consistently obtain consent and enforce study protocols involving SensorKit data collection. The lack of raw data for visits presents many advantages but could pose a challenge to replication. Limitations of this paper itself include the use of data from a single person to explore the potential of SensorKit without making any direct or indirect clinical claims. Although useful in showcasing the potential of SensorKit, analyzing data from a single participant precludes the creation of reliable digital phenotypes. But this case example and other n = 1 reports of digital tools do offer value [21‒23]. Further research should compare data both between and within groups of patients with different clinical diagnoses. Further research should also involve more participants on a longitudinal basis. This would help ensure the reliability of these data over time and confirm that they can detect important differences between and within patient groups. This paper also does not include data from some SensorKit sensors, as we did not collect data from sensors for which we did not request permission from Apple. Additionally, the device usage sensor records application activity broken down into broad categories of application type. The category to which each app belongs is determined by the application developers and may not agree with clinician or patient expectations.

In conclusion, SensorKit offers broad potential for health research as it unlocks novel digital phenotyping signals and opportunities for truly scalable research. The flexible nature of deploying SensorKit means that it can run alongside native digital phenotyping apps like mindLAMP, which offers a novel means for assessing the validity of data and improving prediction models. If SensorKit is widely adopted, it may offer a trusted platform from which health fields can better explore the potential of digital phenotyping at scale.

SensorKit was approved for use by both Apple and the BIDMC IRB (#202000956) in this study. All participants signed written informed consent.

T.K. is affiliated with the Centre for Digital Health Interventions, a joint initiative of the Institute for Implementation Science in Health Care, University of Zurich, the Department of Management, Technology, and Economics at ETH Zurich, and the Institute of Technology Management and School of Medicine at the University of St. Gallen. Center for Digital Health Interventions is funded in part by CSS, a Swiss health insurer. T.K. is also a co-founder of Pathmate Technologies, a university spin-off company that creates and delivers digital clinical pathways. However, neither CSS nor Pathmate Technologies was involved in this research. A.C. is supported by the National Institute for Health Research (NIHR) Oxford Cognitive Health Clinical Research Facility, by an NIHR Research Professorship (Grant RP-2017-08-ST2-006), by the NIHR Oxford and Thames Valley Applied Research Collaboration and by the NIHR Oxford Health Biomedical Research Centre (Grant BRC-1215-20005). The views expressed are those of the authors and not necessarily those of the UK National Health Service, the NIHR, or the UK Department of Health; he has received research, educational, and consultancy fees from INCiPiT (Italian Network for Pediatric Trials), CARIPLO Foundation, Lundbeck, and Angelini Pharma. He is the CI/PI of two trials about seltorexant in depression, sponsored by Janssen. The other authors have no conflicts of interest to declare.

The authors received no funding for this report.

Carsten Langholm and John Torous conceived of this analysis and wrote the first draft. Carsten Langholm performed the analysis. All five authors, as noted on the authorship page (Carsten Langholm, Tobias Kowatsch, Sandra Bucci, Andrea Cipriani, and John Torous), contributed to writing, editing, and reviewing the manuscript.

De-identified sample data can be obtained upon request, but raw data cannot be shared given the nature of the consent used in this study. Code for all features in this paper, including raw sensors, can be obtained from https://github.com/BIDMCDigitalPsychiatry/LAMP-cortex. Further inquiries can be directed to the corresponding author.

1. Torous J, Kiang MV, Lorme J, Onnela JP. New tools for new research in psychiatry: a scalable and customizable platform to empower data driven smartphone research. JMIR Mental Health. 2016 May 5;3(2):e16.
2. DeBoever C, Tanigawa Y, Aguirre M, McInnes G, Lavertu A, Rivas MA. Assessing digital phenotyping to enhance genetic studies of human diseases. Am J Hum Genet. 2020 May 7;106(5):611–22.
3. Henson P, D’Mello R, Vaidyam A, Keshavan M, Torous J. Anomaly detection to predict relapse risk in schizophrenia. Transl Psychiatry. 2021 Jan 11;11(1):28.
4. Mercier HW, Hamner JW, Torous J, Onnela JP, Taylor JA. Digital phenotyping to quantify psychosocial well-being trajectories after spinal cord injury. Am J Phys Med Rehabil. 2020 Dec;99(12):1138–44.
5. Mishra V, Künzler F, Kramer JN, Fleisch E, Kowatsch T, Kotz D. Detecting receptivity for mHealth interventions in the natural environment. Proc ACM Interact Mob Wearable Ubiquitous Technol. 2021 Jun;5(2):74.
6. Künzler F, Mishra V, Kramer JN, Kotz D, Fleisch E, Kowatsch T. Exploring the state-of-receptivity for mHealth interventions. Proc ACM Interact Mob Wearable Ubiquitous Technol. 2019 Dec 11;3(4):140.
7. Banholzer N, Feuerriegel S, Fleisch E, Bauer GF, Kowatsch T. Computer mouse movements as an indicator of work stress: longitudinal observational field study. J Med Internet Res. 2021 Apr 2;23(4):e27121.
8. Tinschert P, Rassouli F, Barata F, Steurer-Stey C, Fleisch E, Puhan MA. Nocturnal cough and sleep quality to assess asthma control and predict attacks. J Asthma Allergy. 2020 Dec 14;13:669–78.
9. Rassouli F, Tinschert P, Barata F, Steurer-Stey C, Fleisch E, Puhan MA. Characteristics of asthma-related nocturnal cough: a potential new digital biomarker. J Asthma Allergy. 2020 Dec 3;13:649–57.
10. Barata F, Tinschert P, Rassouli F, Steurer-Stey C, Fleisch E, Puhan MA. Automatic recognition, segmentation, and sex assignment of nocturnal asthmatic coughs and cough epochs in smartphone audio recordings: observational field study. J Med Internet Res. 2020 Jul 14;22(7):e18082.
11. Staples P, Torous J, Barnett I, Carlson K, Sandoval L, Keshavan M. A comparison of passive and active estimates of sleep in a cohort with schizophrenia. NPJ Schizophr. 2017 Oct 16;3(1):37.
12. Rodriguez-Villa E, Rauseo-Ricupero N, Camacho E, Wisniewski H, Keshavan M, Torous J. The digital clinic: implementing technology and augmenting care for mental health. Gen Hosp Psychiatry. 2020 Jun 30;66:59–66.
13. Vaidyam A, Halamka J, Torous J. Enabling research and clinical use of patient-generated health data (the mindLAMP platform): digital phenotyping study. JMIR Mhealth Uhealth. 2022 Jan 7;10(1):e30557.
14. Rauseo-Ricupero N, Henson P, Agate-Mays M, Torous J. Case studies from the digital clinic: integrating digital phenotyping and clinical practice into today’s world. Int Rev Psychiatry. 2021 Jun;33(4):394–403.
15. Henson P, Pearson JF, Keshavan M, Torous J. Impact of dynamic greenspace exposure on symptomatology in individuals with schizophrenia. PLoS One. 2020 Sep 3;15(9):e0238498.
16. Weizenbaum EL, Fulford D, Torous J, Pinsky E, Kolachalama VB, Cronin-Golomb A. Smartphone-based neuropsychological assessment in Parkinson’s Disease: feasibility, validity, and contextually driven variability in cognition. J Int Neuropsychol Soc. 2022 Apr;28(4):401–13.
17. Currey D, Torous J. Digital phenotyping correlations in larger mental health samples: analysis and replication. BJPsych Open. 2022 Jun 3;8(4):e106.
18. Matcham F, Leightley D, Siddi S, Lamers F, White KM, Annas P. Remote Assessment of Disease and Relapse in Major Depressive Disorder (RADAR-MDD): recruitment, retention, and data availability in a longitudinal remote measurement study. Eur Psychiatr. 2022 Feb 21;65(S1):S112.
19. Schueller SM, Mohr DC, Begale M, Penedo FJ, Mohr DC. Purple: a modular system for developing and deploying behavioral intervention technologies. J Med Internet Res. 2014 Jul 30;16(7):e181.
20. D’Mello R, Melcher J, Torous J. Similarity matrix-based anomaly detection for clinical intervention. Sci Rep. 2022 Jun 2;12(1):9162.
21. Izmailova ES, Ellis RD. When work hits home: the cancer-treatment journey of a clinical scientist driving digital medicine. JCO Clin Cancer Inform. 2022 Sep;6:e2200033.
22. Wu C. Digital health in the era of personalized healthcare: opportunities and challenges for bringing research and patient care to a new level. Digital health: mobile and wearable devices for participatory health applications. 2020 Nov 14. p. 7.
23. Vaidyam A, Roux S, Torous J. Patient innovation in investigating the effects of environmental pollution in schizophrenia: case report of digital phenotyping beyond apps. JMIR Mental Health. 2020 Aug 3;7(8):e19778.