Current measures of neurodegenerative diseases are highly subjective and based on episodic visits. Consequently, drug development decisions rely on sparse, subjective data, which has led to large-scale phase 3 trials of drugs that are likely not effective. Such failures are costly, deter future investment, and hinder the development of treatments. Given the lack of reliable physiological biomarkers, digital biomarkers may help address these shortcomings. Objective, high-frequency data can guide critical decision-making in therapeutic development and allow for more efficient evaluation of therapies for increasingly common disorders.

Failure is the norm in drug development, especially for neurodegenerative disorders. Compared with other therapeutic areas, drug development for neurological disorders generally takes longer, costs more, and is less likely to succeed (Table 1) [1]. As success is less common, the unmet medical need is greater. Neurodegenerative disorders, which are already among the leading causes of disability in industrialized countries (Fig. 1), will become increasingly prevalent as the proportion of the world’s population over 65 will likely double from 8.5% to nearly 17%, or 1.6 billion people, by 2050 [2].

Table 1.

Comparison of drug development within and outside the central nervous system

Fig. 1.

US disease burden by therapeutic area. Disability-adjusted life years, years lost due to ill health, disability, and early death. Source: Murray et al. [61].


Failure, when it occurs, should occur early and efficiently, limiting unnecessary downstream costs. However, some of the largest clinical trials ever conducted for neurodegenerative disorders have failed in phase 3 despite encouraging phase 2 results (Table 2) [3]. In some cases, the trials not only failed to confirm earlier signals but were also stopped early for futility, defined as a limited likelihood of eventual success [4-7]. These late-stage failures carry enormous human and economic costs and can deter future investment [3]. Venture capital funding for neurology-focused companies has decreased by 40% since 2004, and from 2004–2008 to 2009–2013, funding for novel neurology research and development dropped 56% [8]. While the limited ability of experimental therapeutics to affect underlying biological processes is a common cause of failure, current outcome measures – artificial, imperfect, and biased snapshots of clinical status – are another.

Table 2.

Recent phase III trials in neurodegenerative disorders that failed to replicate phase II findings


Unlike most other organ systems, direct measures of brain function and pathology are limited. Biomarkers from blood or cerebrospinal fluid are still largely in their infancy and have failed to contribute to new drug registrations. Imaging, especially MRI, has helped revolutionize therapeutic development for multiple sclerosis [9] but has had more modest utility in other neurodegenerative conditions. Similarly, functional imaging modalities, including PET and SPECT, have facilitated the diagnosis of Parkinson disease but are of limited benefit in measuring disease progression or response to therapy [10]. In the absence of well-established and validated biomarkers of diagnosis or disease progression, the primary means for assessing the efficacy of novel therapies for neurodegenerative disorders are largely a combination of clinician-administered rating scales and patient-reported outcomes, such as the Alzheimer’s Disease Assessment Scale-Cognitive Subscale [11] and the Movement Disorder Society-Unified Parkinson’s Disease Rating Scale [12]. However, these scales have substantial limitations, which can be mitigated in part by digital biomarkers.

Current clinical outcome measures for neurodegenerative disorders are limited because they are (1) rater dependent, (2) highly variable, and (3) episodic. By their nature, rating scales depend on raters, who have variable expertise and experience. Although many studies have established a certain level of inter-rater and intra-rater reliability, reported reliability tends to be only “moderate to high,” with little agreement between studies [13, 14]. Studies often recommend the use of more specialized raters, such as physicians over nurse practitioners, and that the same rater follow participants longitudinally, since intra-rater reliability is better than inter-rater reliability [13]. However, given that adequately trained raters are few, expensive, and frequently located only in specialized centers, this requirement is another roadblock to efficient therapeutic development. Furthermore, raters are subject to bias – conscious or unconscious tendencies to distort the truth. While blinding of interventions can minimize potential bias, clinicians are subject to expectations about the effectiveness of drugs. For example, in a recent clinical trial of pridopidine in Huntington disease, in which 2 previous trials had suggested an improvement in the total motor score in the active arm [15, 16], the total motor score in both the placebo and active arms improved substantially in a disorder where scores should worsen [17].

Variability further limits current measures. By their nature, assessments are subjective or, when quasi-objective (e.g., counting finger taps in Parkinson disease), require human transformation of a quantitative signal into an ordinal ranking. The actual quantity and quality of the taps are collapsed into a categorical scale, and precise measurements of their number, speed, or rhythmicity are lost. Intra-rater variability and, in multi-center trials, inter-rater variability further reduce precision. This variability reduces confidence in the replicability of findings, especially in early-stage clinical trials that may be underpowered for efficacy measures. An illustration of this variability can be drawn from recent studies in Parkinson disease. In two phase 2 Parkinson disease trials, the 12-month change in the total Unified Parkinson’s Disease Rating Scale was 6 points in the placebo arm in one study and 8 points in the second (33% higher) [7, 18, 19]. This difference occurred despite the use of the same sites, the same investigators, the same patient population, and very similar protocols.
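The information loss inherent in converting a quantitative signal to an ordinal ranking can be sketched in a few lines of code. The thresholds and the 0–4 range below are hypothetical, chosen only for illustration; they do not correspond to any validated clinical scale.

```python
# Illustrative sketch: mapping a raw finger-tap count to an ordinal
# severity score. Thresholds are hypothetical, for demonstration only.

def ordinal_tap_score(taps_per_10s: int) -> int:
    """Map a raw tap count (taps in 10 s) to a 0 (normal) - 4 (severe) score."""
    if taps_per_10s >= 50:
        return 0
    elif taps_per_10s >= 40:
        return 1
    elif taps_per_10s >= 30:
        return 2
    elif taps_per_10s >= 20:
        return 3
    return 4

# Two patients with measurably different motor performance...
patient_a, patient_b = 49, 40  # taps in 10 s
# ...receive identical ordinal ratings: a 9-tap difference disappears.
print(ordinal_tap_score(patient_a), ordinal_tap_score(patient_b))  # -> 1 1
```

Any quantitative difference within a category, along with speed and rhythmicity, is discarded at the moment of rating, which is one source of the variability described above.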

To offset the variability in these scales, more observations are required. However, current scales are administered infrequently (e.g., monthly or quarterly), and most trials rely on changes in these measures from the beginning to the end of the trial. These episodic assessments, all conducted in artificial environments at arbitrary times, limit the power of studies and may not even be an accurate representation of an individual’s true state [20]. Moreover, episodic assessments cannot reliably evaluate fluctuating events (e.g., the variable response to medication), capture rare incidents (e.g., falls), or assess behaviors that take place over long periods outside the examination room (e.g., everyday life activities) [21].

Digital biomarkers – the use of a biosensor to collect objective data on a biological (e.g., blood glucose, serum sodium), anatomical (e.g., mole size), or physiological (e.g., heart rate, blood pressure) parameter followed by the use of algorithms to transform these data into interpretable outcome measures [21] – can help address many of the shortcomings in current measures. These new measures, which include portable (e.g., smartphones), wearable, and implantable devices, are by their nature largely independent of raters. They are, therefore, not prone to rater bias. An early digital biomarker for neurodegenerative conditions is the “Q motor,” an in-clinic quantitative motor assessment of finger tapping, hand movements, and other tasks, which has been used in clinical trials in Parkinson [22] and Huntington disease [23]. In contrast to the total motor score, an investigator-rated assessment of motor signs in Huntington disease, the Q motor does not exhibit placebo effects [24].
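The sensor-to-outcome pipeline defined above – raw biosensor samples transformed by an algorithm into an interpretable measure – can be illustrated with a deliberately minimal sketch. The threshold-crossing rule below is a toy heuristic, not a validated gait algorithm, and the sample data are synthetic.

```python
# Minimal sketch of the digital-biomarker pipeline: raw sensor samples
# -> algorithm -> interpretable outcome measure (here, a step count).
# The peak-counting heuristic is illustrative, not a validated algorithm.
import math

def vector_magnitude(sample):
    """Combine 3-axis accelerometer readings (in g) into one magnitude."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def count_steps(samples, threshold=1.2):
    """Count upward threshold crossings of the magnitude signal as steps."""
    steps, above = 0, False
    for s in samples:
        m = vector_magnitude(s)
        if m > threshold and not above:
            steps += 1
            above = True
        elif m <= threshold:
            above = False
    return steps

# Synthetic 3-axis data: quiet standing, then three impact-like peaks.
data = [(0.0, 0.0, 1.0)] * 5 + [(0.0, 0.0, 1.5), (0.0, 0.0, 1.0)] * 3
print(count_steps(data))  # -> 3
```

Because the transformation is algorithmic rather than rater-dependent, the same input always yields the same output, which is the property that makes such measures resistant to rater bias and placebo-driven drift.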

Because digital biomarkers can include assessments in real-world environments, the variability in the assessments gleaned from digital biomarkers is likely to be greater than that of rating scales conducted in controlled clinical environments. For example, in a pilot study of wearable accelerometers in Huntington disease, the variability in various measures of gait was greater when assessed at home than in clinic. However, the increased variability was more than offset by the increased frequency of assessments. Compared to the 20 gait assessments done in clinic, more than 14,000 gait assessments were captured over 1 week outside of the clinic [25]. The resulting immense increase in statistical power enabled the identification of multiple significant differences in gait between those with and without Huntington disease that were not detected based on the in-clinic assessments. The goal of digital biomarkers is to maximize the ecological validity and the temporal and spatial resolution of capturing motor and nonmotor phenomena that are expected to change over time. As such, wearable technology may provide a more realistic portrayal of behaviors of interest in clinical and research settings [26].
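The trade-off between per-measurement noise and measurement frequency follows from the fact that the precision of a mean scales with the square root of the number of observations. The sketch below uses hypothetical standard deviations (not values from the cited study) and treats measurements as independent, which overstates the gain for correlated at-home samples; it is a back-of-envelope illustration only.

```python
# Back-of-envelope sketch: noisier at-home measurements can still yield
# a more precise estimate, because precision scales with 1/sqrt(n).
# Standard deviations are hypothetical; independence is assumed.
import math

def sem(sd, n):
    """Standard error of the mean for n independent measurements."""
    return sd / math.sqrt(n)

clinic_sem = sem(sd=1.0, n=20)      # 20 in-clinic gait assessments
home_sem = sem(sd=2.0, n=14000)     # 14,000 at-home assessments, 2x noisier

print(round(clinic_sem, 3), round(home_sem, 3))  # -> 0.224 0.017
```

Under these illustrative assumptions, the at-home estimate is roughly 13 times more precise despite double the per-measurement noise, which is the mechanism by which frequency offsets variability in the study described above.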

Rigorous, independent evaluations of digital biomarkers are few. As of November 2015, at least 73 different devices (22 wearable, 38 nonwearable, and 13 hybrid) have been developed to assess Parkinson disease alone [27]. However, of the 38 nonwearable devices, only the Nintendo Wii Balance Board and GAITRite® gait analysis system were successfully used by groups besides their own developers [28, 29]. Besides lack of validation, additional reasons that may be contributing to the limited use of these technologies include skepticism by the medical community, inconsistent adherence by patients, discrepancy between research utility and clinical value, and lack of compatibility between systems, which hinders data integration and analytics [21, 30]. However, early studies suggest that a smartphone application in Parkinson disease can differentiate those with the disease from those without, be used to predict disease severity as assessed by traditional measures, and detect pharmacological effects of approved therapies [31]. In addition, where evidence of value has been generated [32-36], the US Food and Drug Administration has cleared some digital biomarkers for use [37, 38].

Experience using digital biomarkers as outcome measures in clinical trials is limited but growing fast. A trial of a nitrate in congestive heart failure recently published in The New England Journal of Medicine used an accelerometer as the primary outcome measure [39]. Its use was supported by traditional measures, such as a timed walking test and a quality of life measure, as secondary outcome measures. For neurodegenerative disorders, Roche recently incorporated a Parkinson disease smartphone application that measures gait, balance, finger tapping, and voice as an exploratory measure in a phase 1 clinical trial [40]. Such experimentation in clinical trials is needed to establish feasibility, collect data to demonstrate the validity of such tools alongside traditional measures, foster regulatory acceptance, and eventually provide data to determine the clinical meaningfulness of data captured from this new class of biomarkers.

In the short term, the initial application of these tools may be to support internal decision-making by pharmaceutical and medical device firms. Firms need objective, ideally sensitive, and frequent measures to support dose selection and to make go/no-go decisions on therapies while they are early in development. Traditional clinician-rated outcome measures have often failed to provide reliable guidance.

The need for that guidance is increasing: over the past 20 years, the number of deaths attributed to Parkinson disease globally has more than doubled, and the number due to Alzheimer disease and other dementias has more than tripled [41]. Given that many of these neurodegenerative disorders, such as Parkinson disease, have external manifestations including movement disorders, they lend themselves well to assessment by digital biomarkers. These biomarkers can also provide insights into how these diseases affect other domains of health, from sleep to socialization [42, 43], that are poorly, or not at all, assessed with traditional scales. The failures of the past and the rising global burden of neurodegenerative disorders call for new tools for therapeutic development. Digital biomarkers are such a tool and can accelerate the evaluation of much needed therapies for the field.

The authors have no ethical conflicts to disclose.

Dr. Dorsey is a consultant to MC10, a wearable sensor company. Dr. Papapetropoulos is a full-time employee of TEVA Pharmaceuticals and serves as co-chair of the MDS Technology Task Force. Dr. Kieburtz has received research support from the National Institutes of Health (NINDS), Michael J. Fox Foundation, and TEVA and has acted as a consultant for the National Institutes of Health (NINDS), Acorda, Astellas Pharma, AstraZeneca, BioMarin Pharmaceutica, Biotie, Britannia, CHDI, Clearpoint Strategy Group, Clintrex, Corium International, Cynapsus, Forward Pharma, Genzyme, INC Research, Intec, Lundbeck, Medivation, Melior Discovery, Neuroderm, Neurmedix, Orion Pharma, Otsuka, Pfizer, Pharma2B, Prana Biotechnology, Prothena/Neotope/Elan Pharmaceutical, Raptor Pharmaceuticals, Remedy Pharmaceuticals, Roche/Genentech, Sage Bionetworks, Sanofi, Serina, Sunovion, Synagile, Titan, Upsher-Smith, US WorldMeds, Vaccinex, Vertex Pharmaceuticals, and Weston Brain Institute.

Research for this paper was supported in part by NINDS (P20NS092529). Opinions and viewpoints expressed are those of the authors.

1.
Adams CP, Brantner VV: Estimating the cost of new drug development: is it really $802 million? Health Aff (Millwood) 2006; 25: 420–428.
2.
He W, Goodkind D, Kowal P: An Aging World: 2015. Washington, U.S. Government Publishing Office, 2016, DOI: P95/09–1.
3.
Skripka-Serry J: The great neuro-pipeline “brain drain” (and why Big Pharma hasn’t given up on CNS disorders). 2013. http://www.ddw-online.com/therapeutics/p216813-the-great-neuro-pipeline-brain-drain-(and-why-big-pharma-hasn-t-given-up-on-cns-disorders)-fall-13.html (accessed January 25, 2017).
4.
Viele K, McGlothlin A, Broglio K: Interpretation of clinical trials that stopped early. JAMA 2016; 315: 1646.
5.
Huntington Study Group (HSG): Announcement of CREST-E Early Study Closure. 2014. http://huntingtonstudygroup.org/tag/crest-e/ (accessed January 25, 2017).
6.
Huntington Study Group (HSG): Announcement of 2CARE Early Study Closure. http://huntingtonstudygroup.org/tag/2care/ (accessed January 25, 2017).
7.
Kieburtz K, Tilley BC, Elm JJ, et al: Effect of creatine monohydrate on clinical progression in patients with Parkinson disease. JAMA 2015; 313: 584.
8.
Thomas D, Wessel C: Venture funding of therapeutic innovation; in: Bio Industry Analysis. Washington, Biotechnology Industry Organization (BIO), 2015.
9.
Ge Y: Multiple sclerosis: the role of MR imaging. AJNR Am J Neuroradiol 2006; 27: 1165–1176.
10.
Ravina B, Eidelberg D, Ahlskog JE, et al: The role of radiotracer imaging in Parkinson disease. Neurology 2005; 64: 208–215.
11.
Rosen WG, Mohs RC, Davis KL: A new rating scale for Alzheimer’s disease. Am J Psychiatry 1984; 141: 1356–1364.
12.
Goetz CG, Tilley BC, Shaftman SR, et al: Movement Disorder Society-sponsored revision of the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS): Scale presentation and clinimetric testing results. Mov Disord 2008; 23: 2129–2170.
13.
Palmer JL, Coats MA, Roe CM, Hanko SM, Xiong C, Morris JC: Unified Parkinson’s Disease Rating Scale-Motor Exam: inter-rater reliability of advanced practice nurse and neurologist assessments. J Adv Nurs 2010; 66: 1382–1387.
14.
Post B, Merkus MP, de Bie RMA, de Haan RJ, Speelman JD: Unified Parkinson’s Disease Rating Scale Motor Examination: are ratings of nurses, residents in neurology, and movement disorders specialists interchangeable? Mov Disord 2005; 20: 1577–1584.
15.
The Huntington Study Group HART Investigators: A randomized, double-blind, placebo-controlled trial of pridopidine in Huntington’s disease. Mov Disord 2013; 28: 1407–1415.
16.
de Yebenes JG, Landwehrmeyer B, Squitieri F, et al: Pridopidine for the treatment of motor function in patients with Huntington’s disease (MermaiHD): a phase 3, randomised, double-blind, placebo-controlled trial. Lancet Neurol 2011; 10: 1049–1057.
17.
Dorsey ER, Beck CA, Darwin K, et al: Natural history of Huntington disease. JAMA Neurol 2013; 70: 1520–1530.
18.
Kieburtz K, Tilley BC, Elm JJ, et al: Effect of creatine monohydrate on clinical progression in patients with Parkinson disease. JAMA 2015; 313: 584.
19.
NINDS NET-PD Investigators: A randomized, double-blind, futility clinical trial of creatine and minocycline in early Parkinson disease. Neurology 2006; 66: 664–671.
20.
NINDS NET-PD Investigators: A randomized clinical trial of coenzyme Q10 and GPI-1485 in early Parkinson disease. Neurology 2007; 68: 20–28.
21.
Torous J, Staples P, Shanahan M, et al: Utilizing a personal smartphone custom app to assess the Patient Health Questionnaire-9 (PHQ-9) depressive symptoms in patients with major depressive disorder. JMIR Ment Health 2015; 2:e8.
22.
Espay AJ, Bonato P, Nahab FB, et al: Technology in Parkinson’s disease: challenges and opportunities. Mov Disord 2016; 31: 1272–1282.
23.
Maetzler W, Ellerbrock M, Heger T, Sass C, Berg D, Reilmann R: Digitomotography in Parkinson’s disease: a cross-sectional and longitudinal study. PLoS One 2015; 10:e0123914.
24.
Sampaio C, Borowsky B, Reilmann R: Clinical trials in Huntington’s disease: interventions in early clinical development and newer methodological approaches. Mov Disord 2014; 29: 1419–1428.
25.
Reilmann R, Rouzade-Dominguez M-L, Saft C, et al: A randomized, placebo-controlled trial of AFQ056 for the treatment of chorea in Huntington’s disease. Mov Disord 2015; 30: 427–431.
26.
Andrzejewski KL, Dowling AV, Stamler D, et al: Wearable sensors in Huntington disease: a pilot study. J Huntingtons Dis 2016; 5: 199–206.
27.
Stamford JA, Schmidt PN, Friedl KE: What engineering technology could do for quality of life in Parkinson’s disease: a review of current needs and opportunities. IEEE J Biomed Health Inform 2015; 19: 1862–1872.
28.
Godinho C, Domingos J, Cunha G, et al: A systematic review of the characteristics and validity of monitoring technologies to assess Parkinson’s disease. J Neuroeng Rehabil 2016; 13: 24.
29.
Synnott J, Chen L, Nugent CD, Moore G: WiiPD – an approach for the objective home assessment of Parkinson’s disease. Conf IEEE Proc Eng Med Biol Soc 2011; 2011: 2388–2391.
30.
Nelson AJ, Zwick D, Brody S, et al: The validity of the GaitRite and the Functional Ambulation Performance scoring system in the analysis of Parkinson gait. NeuroRehabilitation 2002; 17: 255–262.
31.
Maetzler W, Klucken J, Horne M: A clinical view on the development of technology-based tools in managing Parkinson’s disease. Mov Disord 2016; 31: 1263–1271.
32.
Arora S, Venkataraman V, Zhan A, et al: Detecting and monitoring the symptoms of Parkinson’s disease using smartphones: a pilot study. Parkinsonism Relat Disord 2015; 21: 650–653.
33.
Ossig C, Gandor F, Fauser M, et al: Correlation of quantitative motor state assessment using a kinetograph and patient diaries in advanced PD: data from an observational study. PLoS One 2016; 11:e0161559.
34.
Kotschet K, Johnson W, McGregor S, et al: Daytime sleep in Parkinson’s disease measured by episodes of immobility. Parkinsonism Relat Disord 2014; 20: 578–583.
35.
Griffiths RI, Kotschet K, Arfon S, et al: Automated assessment of bradykinesia and dyskinesia in Parkinson’s disease. J Parkinsons Dis 2012; 2: 47–55.
36.
Mera TO, Heldman DA, Espay AJ, Payne M, Giuffrida JP: Feasibility of home-based automated Parkinson’s disease motor assessment. J Neurosci Methods 2012; 203: 152–156.
37.
Heldman DA, Espay AJ, LeWitt PA, Giuffrida JP: Clinician versus machine: reliability and responsiveness of motor endpoints in Parkinson’s disease. Parkinsonism Relat Disord 2014; 20: 590–595.
38.
Global Kinetics Corporation. Global Kinetics Corporation Announces FDA Clearance of the Personal KinetiGraphTM for Assessment of Parkinson’s Disease Symptoms. 2014. http://www.prnewswire.com/news-releases/global-kinetics-corporation-announces-fda-clearance-of-the-personal-kinetigraph-for-assessment-of-parkinsons-disease-symptoms-274596041.html (accessed January 31, 2017).
39.
Kinesia Objective Motor Assessment. Press Release: Great Lakes NeuroTech Enters Medical Device App Market With Kinesia One For Monitoring Parkinson’s Disease. 2014. http://glneurotech.com/kinesia/pr-kinesia-one-launch/ (accessed January 31, 2017).
40.
Redfield MM, Anstrom KJ, Levine JA, et al: Isosorbide mononitrate in heart failure with preserved ejection fraction. N Engl J Med 2015; 373: 2314–2324.
41.
Roche app measures Parkinson’s disease fluctuations. http://www.roche.com/media/store/roche_stories/roche-stories-2015-08-10.htm (accessed January 27, 2017).
42.
The Lancet Neurology: Joining forces to fight neurodegenerative diseases. Lancet Neurol 2013; 12: 119.
43.
Raggi A, Ferri R: Sleep disorders in neurodegenerative diseases. Eur J Neurol 2010; 17: 1326–1338.
44.
Jedrziewski MK, Ewbank DC, Wang H, Trojanowski JQ: The Impact of exercise, cognitive activities, and socialization on cognitive function: results from the National Long-Term Care Survey. Am J Alzheimers Dis Other Demen 2014; 29: 372–378.
45.
Tufts Center for the Study of Drug Development: CNS Drugs Take Longer to Develop and Have Lower Success Rates than Other Drugs. 2014. http://csdd.tufts.edu/news/complete_story/pr_ir_nov_dec_ir (accessed January 31, 2017).
46.
Farlow M, Arnold SE, van Dyck CH, et al: Safety and biomarker effects of solanezumab in patients with Alzheimer’s disease. Alzheimers Dement 2012; 8: 261–271.
47.
Eli Lilly and Company: Lilly Announces Top-Line Results of Solanezumab Phase 3 Clinical Trial. 2016. https://investor.lilly.com/releasedetail.cfm?ReleaseID=1000871 (accessed January 31, 2017).
48.
Baillieul R: Eli Lilly And Co: This is Why LLY Stock Got Crushed Today. 2016. https://www.incomeinvestors.com/eli-lilly-co-lly-stock-got-crushed-today/9107/ (accessed January 31, 2017).
49.
Science Daily. Results of 9-Month Phase II Study of Gammagard Intravenous Immunoglobulin. 2008. https://www.sciencedaily.com/releases/2008/07/080730175522.htm (accessed January 31, 2017).
50.
Dang M: Study Seeks Cure for Alzheimer’s. 2007. http://cornellsun.com/2007/09/11/study-seeks-cure-for-alzheimers/ (accessed January 31, 2017).
51.
Jeffrey S: IVIG Fails in Phase 3 for Alzheimer’s. 2013. http://www.medscape.com/viewarticle/803724 (accessed January 31, 2017).
52.
Cudkowicz M, Bozik ME, Ingersoll EW, et al: The effects of dexpramipexole (KNS-760704) in individuals with amyotrophic lateral sclerosis. Nat Med 2011; 17: 1652–1656.
53.
Cudkowicz ME, van den Berg LH, Shefner JM, et al: Dexpramipexole versus placebo for patients with amyotrophic lateral sclerosis (EMPOWER): a randomised, double-blind, phase 3 trial. Lancet Neurol 2013; 12: 1059–1067.
54.
McGarry A, McDermott M, Kieburtz K, et al: A randomized, double-blind, placebo-controlled trial of coenzyme Q10 in Huntington disease. Neurology 2017; 88: 152–159.
55.
Huntington Study Group: A randomized, placebo-controlled trial of coenzyme Q10 and remacemide in Huntington’s disease. Neurology 2001; 57: 397–404.
56.
clinicaltrials.gov. Coenzyme Q10 in Huntington’s Disease (HD) (2CARE). 2016. https://clinicaltrials.gov/ct2/show/NCT00608881 (accessed January 31, 2017).
57.
Kieburtz K, McDermott MP, Voss TS, et al: A randomized, placebo-controlled trial of latrepirdine in Huntington disease. Arch Neurol 2010; 67: 154.
58.
HORIZON Investigators of the Huntington Study Group and European Huntington’s Disease Network: A randomized, double-blind, placebo-controlled study of latrepirdine in patients with mild to moderate Huntington disease. JAMA Neurol 2013; 70: 25.
59.
Beal MF, Oakes D, Shoulson I, et al. A randomized clinical trial of high-dosage coenzyme Q10 in early Parkinson disease. JAMA Neurol 2014; 71: 543.
60.
Shults CW, Oakes D, Kieburtz K, et al. Effects of coenzyme Q10 in early Parkinson disease: evidence of slowing of the functional decline. Arch Neurol 2002; 59: 1541–1550.
61.
Murray CJL, Atkinson C, Bhalla K, et al. The state of US health, 1990-2010. JAMA 2013; 310: 591.