Purpose: To evaluate the diagnostic accuracy of the 2019 Convolutional Neural Network (CNN) model with Inception-V3, a software system for the automated screening of diabetic retinopathy (DR) on digital colour fundus photographs. Methods: In this cross-sectional study, 295 fundus images were analysed by the CNN model and compared to a panel of ophthalmologists. Images were obtained from a dataset acquired within a screening programme. Diagnostic accuracy measures and respective 95% CI were calculated. Results: The sensitivity and specificity of the CNN model in diagnosing referable DR were 81% (95% CI 66–90%) and 97% (95% CI 95–99%), respectively. The positive predictive value was 86% (95% CI 72–94%) and the negative predictive value 96% (95% CI 93–98%). The positive likelihood ratio was 33 (95% CI 15–75) and the negative likelihood ratio 0.20 (95% CI 0.11–0.35). The clinical impact of the model is demonstrated by the change from the pre-test probability of referable DR (assuming a prevalence of 16%) to a post-test probability of 86% for a positive test result and 4% for a negative test result. Conclusion: A negative CNN model result safely excludes DR, and its use may significantly reduce the burden on ophthalmologists at reading centres.

Telemedicine-based screening of diabetic retinopathy (DR) with human grading is the method of choice in different parts of the world [1] for identifying people with DR early, so that they can be treated effectively and vision loss can be avoided. Regular screening for the detection of DR is recommended for all individuals with diabetes [1]. Digital colour fundus images are acquired with non-mydriatic retinal cameras operated by a trained eye technician in local settings, usually at primary care offices [1], with the aim of achieving a higher rate of patient compliance [2]. Subsequent remote image evaluation by a retina specialist [3, 4] helps to deal with the shortage of ophthalmologists [5]. Patients are referred to a retina specialised clinic if clinical signs of severe non-proliferative DR, proliferative DR or moderate macular oedema are shown on retinal photographs [6]. This condition is commonly termed referable DR. However, telescreening has some potential vulnerabilities, namely the complex and time-consuming task of interpreting eye fundus images for DR diagnosis and the low reliability between ophthalmologists in this task, which varies from weak to substantial (κ 0.19–0.75) [7, 8].

DR affects 34.6% of individuals with diabetes [9]. As the number of people with diabetes is expected to increase from 382 million in 2013 to 592 million in 2035 [10], interest in automated methods has grown in recent years [11]. Current research includes the estimation of the diagnostic accuracy of deep learning-based diagnostic systems [12]. The Convolutional Neural Network (CNN) model with Inception-V3 is an artificial neural network based on deep learning for the automated detection of DR [13]. In a previous pilot study assessing preliminary safety and performance in individuals with diabetes referred to ophthalmology because of DR, the software achieved a sensitivity of 74% and a specificity of 95% [14]. Nevertheless, this innovative diagnostic test needs to be clinically validated in people with the target disease, under conditions similar to those in which the test is intended to be used [15, 16]. For this purpose, sensitivity, specificity, predictive values and likelihood ratios should be used [15, 16]. This information is relevant for clinicians who want to apply the findings of the study when deciding whether to adopt the test. We conducted a study to evaluate the diagnostic accuracy and clinical usefulness of the software in a larger population of individuals with diabetes, with and without DR, in a screening context.

Study Design and Image Selection

A cross-sectional observational study was conducted using anonymised retinal images of individuals with diabetes from EyePACS, a publicly available dataset from primary care offices in the United States of America [17]. Images were non-stereoscopic, acquired by trained camera operators with a variety of non-mydriatic digital retinal cameras, including Canon CR-DGi and Canon CR-1, and with selective pupil dilation [17, 18]. A study sample of 350 images was randomly selected from the database containing 53,571 images.

The sample size was calculated for a binary test outcome using the following assumptions: an expected sensitivity of at least 70% [14] (given the result of a preliminary study with the neural network, and because this value is considered the lowest acceptable for screening tools [19]), a confidence level of 95% (type I error of 5%), a power of 88% (type II error of 12%), and a DR prevalence of 20% [20]. Accordingly, the required sample size was 286 images. Allowing for a median image rejection rate of 18.3% [3] for ungradable fundus images, the sample size was increased to 350 images.
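
This calculation can be sketched in code. The exact formula used in the study is not stated, so the first step below assumes a Buderer-style sensitivity-driven calculation with the 12% figure taken as the desired CI half-width; under that assumption it lands near, but not exactly on, the reported 286 images. The inflation step for ungradable images reproduces the reported 350 exactly.

```python
import math

z = 1.96     # standard normal quantile for a 95% confidence level
se = 0.70    # minimum sensitivity the test must demonstrate
d = 0.12     # assumed precision (CI half-width) around sensitivity
prev = 0.20  # expected prevalence of DR in the screened population

# Buderer-style calculation (an assumption, not the study's stated formula):
n_cases = z**2 * se * (1 - se) / d**2  # diseased images needed (~56)
print(math.ceil(n_cases / prev))       # ~281; the study reports 286

# Inflation for the 18.3% median image-rejection rate, using the reported 286:
print(round(286 / (1 - 0.183)))        # -> 350, the sample size used
```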

Index Test

The 2019 CNN model with Inception-V3 in EyePACS data is an automated diagnostic system based on deep learning, trained on the EyePACS and Messidor datasets, that automatically identifies DR in digital colour fundus images [13]. It provides a dichotomised classification: presence or absence of DR, and presence or absence of referable DR. The image analysis process is fully automatic and was performed on a computer.

Reference Standard

The reference standard was established by taking the majority decision of 3 independent graders: the EyePACS eye care specialist, one general ophthalmologist and one retina specialist. They used a modified version of the International Clinical Disease Severity Scale (ICDSS) for DR, which was derived from the Early Treatment of Diabetic Retinopathy Study (ETDRS) and is used in most DR screening programmes [21]. Accordingly, DR was classified into 4 severity levels: without retinopathy (R0), mild non-proliferative (R1), moderate or severe pre-proliferative (R2) and proliferative (R3). Diabetic macular oedema was classified as M1. Images deemed not classifiable by at least one grader were not included in the study. Images graded as R0 were classified as negative for the presence of DR, and images graded as R1, R2, R3 or M1 as positive; R0 and R1 were classified as negative for the presence of referable DR, and R2, R3 or M1 as positive.
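
These dichotomisation rules can be made explicit in code; the sketch below is a hypothetical helper illustrating the mapping, not the study's actual implementation.

```python
# Map a graded image to the two binary outcomes used in the analysis.
# r_grade is the ICDSS retinopathy level (R0-R3); macular_oedema flags M1.
def dichotomise(r_grade: str, macular_oedema: bool) -> dict:
    any_dr = r_grade in {"R1", "R2", "R3"} or macular_oedema
    referable_dr = r_grade in {"R2", "R3"} or macular_oedema
    return {"any_DR": any_dr, "referable_DR": referable_dr}

print(dichotomise("R1", False))  # {'any_DR': True, 'referable_DR': False}
print(dichotomise("R0", True))   # {'any_DR': True, 'referable_DR': True}
```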

Analysis

The software results were compared to those of the clinical reference standard. Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV), with 95% CI, were calculated as measures of diagnostic accuracy in accordance with the STARD 2015 guidelines for reporting diagnostic accuracy studies [22]. Sensitivity is the proportion of images with the disease that have a positive test result, and specificity is the proportion of images without the disease that have a negative test result. PPV is the proportion of images with a positive result that have the disease, and NPV is the proportion of images with a negative result that do not have the disease.

The likelihood ratios were also calculated because they estimate how strongly a test result changes the probability that an individual with diabetes has DR/referable DR, thereby helping clinical decision-making [16]. The positive likelihood ratio (LR+) of a test is the ratio between the proportion of true positives (i.e., sensitivity) and the proportion of false positives (i.e., 1 – specificity) [16]. Conversely, the negative likelihood ratio (LR–) of a test is the ratio between the proportion of false negatives (i.e., 1 – sensitivity) and the proportion of true negatives (i.e., specificity) [16].
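
All of the measures defined in the two preceding paragraphs follow directly from the 2 × 2 confusion matrix. The minimal sketch below uses the referable DR counts reported later in the Results (38 true positives, 9 false negatives, 6 false positives, 242 true negatives) and reproduces the published figures.

```python
tp, fn, fp, tn = 38, 9, 6, 242  # referable DR counts from this study's Results

sensitivity = tp / (tp + fn)    # diseased images that test positive
specificity = tn / (tn + fp)    # non-diseased images that test negative
ppv = tp / (tp + fp)            # positives that truly have the disease
npv = tn / (tn + fn)            # negatives that are truly disease-free
lr_pos = sensitivity / (1 - specificity)   # LR+ = sens / (1 - spec)
lr_neg = (1 - sensitivity) / specificity   # LR- = (1 - sens) / spec

print(f"Se={sensitivity:.0%} Sp={specificity:.0%} PPV={ppv:.0%} "
      f"NPV={npv:.0%} LR+={lr_pos:.1f} LR-={lr_neg:.2f}")
# -> Se=81% Sp=98% PPV=86% NPV=96% LR+=33.4 LR-=0.20
```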

The clinical usefulness of the software test was assessed by the extent to which it modifies the pre-test probability of occurrence of DR/referable DR in individuals with diabetes [16]. The post-test probability was calculated using the graphical tool Fagan’s nomogram, given the pre-test probability (i.e., the prevalence of disease in the study population) and the likelihood ratios [16].
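
Fagan’s nomogram is a graphical shortcut for Bayes’ theorem in odds form: the pre-test probability is converted to odds, multiplied by the likelihood ratio, and converted back to a probability. A minimal sketch of that arithmetic, using this study’s 16% prevalence and the referable DR likelihood ratios:

```python
def post_test_probability(pre_test_prob: float, lr: float) -> float:
    pre_odds = pre_test_prob / (1 - pre_test_prob)   # probability -> odds
    post_odds = pre_odds * lr                        # Bayes' theorem in odds form
    return post_odds / (1 + post_odds)               # odds -> probability

prevalence = 0.16                                        # pre-test probability
print(f"{post_test_probability(prevalence, 33):.0%}")    # positive result -> 86%
print(f"{post_test_probability(prevalence, 0.20):.0%}")  # negative result -> 4%
```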

Of the 350 fundus images, 55 (16%) were classified as ungradable by at least one of the ophthalmologists and were excluded. All gradable images received an automated classification from the software, as described in Figure 1.

Fig. 1.

Flow of participants and test results. DR, diabetic retinopathy; DM, diabetes mellitus.


The prevalence of DR was 15.9% (95% CI 12.0–20.7%). The software correctly identified 38 of the 47 (81%) analysable images with any DR and 38 of the 47 (81%) with referable DR. Among the 248 analysable images with no DR according to the panel of ophthalmologists, the software output was negative in 237 (96%) for any DR and in 242 (98%) for referable DR. There were 11 false positives and 9 false negatives for any DR, and 6 false positives and 9 false negatives for referable DR. The software and reference standard classifications for all 295 images are described in Table 1.

Table 1.

Diagnosis provided by the CNN model with Inception-V3 compared to the reference standard


Forty-seven images had DR according to the ophthalmologists’ grading, and the sensitivity and specificity of the software in identifying DR were 80.8% and 95.6%, respectively, with a PPV of 77.6% and an NPV of 96.3%. Although some of the values were higher when the clinical outcome was referable DR, no statistically significant differences were found in the observed accuracy measures between the identification of DR and of referable DR. Table 2 shows the calculated sensitivity, specificity, PPV, NPV, LR+ and LR– of the software for the screening of DR and referable DR, with respective 95% CI.

Table 2.

CNN model with Inception-V3 diagnostic accuracy for DR when compared to ophthalmologist’s classification


For DR, the LR+ was 18.2 and the LR– was 0.20. In Fagan’s nomogram, a straight line is drawn from the patient’s pre-test probability (16%) through the LR for a positive and for a negative test result, pointing to the post-test probability of DR in each case. This yielded a post-test probability of 74% for a positive test result and of 3% for a negative test result, as shown in Figure 2. The absolute difference between pre- and post-test probabilities of DR is 0.58 for positive and 0.13 for negative test results. Similarly, for referable DR, the LR+ was 33 and the LR– was 0.20. Given an estimated prevalence of 16%, if the patient tests positive, the post-test probability of truly having referable DR is 86%; if the patient tests negative, it is 4%. The absolute difference between pre- and post-test probabilities of referable DR is 0.70 for positive and 0.12 for negative test results.

Fig. 2.

Fagan’s nomogram. a Diabetic retinopathy: pre-test probability = 16%. A positive test result leads to an increase in the post-test probability to 74%. A negative test result leads to a decrease in the post-test probability to 3%. b Referable diabetic retinopathy: pre-test probability = 16%. A positive test result leads to an increase in the post-test probability to 86%. A negative test result leads to a decrease in the post-test probability to 4%.


Diagnostic automation for DR using artificial intelligence will likely become essential as our rapidly ageing societies and rising diabetes prevalence continue to challenge healthcare systems with growing demand for the observation of fundus images. With the clinical use of the CNN model with Inception-V3 automated diagnostic system for DR, we can expect 85% fewer observations by ophthalmologists at reading centres, at a <4% probability of missing referable DR. The use of the CNN model for DR referral triage would safely reduce ophthalmologists’ workload. If we were to refer all images automatically classified as DR, this would result in a slightly higher number of observations needed compared with sending referable DR only (a reduction of 83 vs. 85%), without an important impact on the proportion of false negatives. Therefore, we consider that the best artificial intelligence screening approach would be for ophthalmologists at the reading centre to receive only the images classified as referable DR-positive by the software. In this way, clinical practices could easily accommodate the expected increase in the number of required screenings worldwide. If this software is used at the point of care, at the time of image acquisition, it has the additional advantages of immediate communication of test results to the patient and more rapid observation of screen-positive patients by an ophthalmologist.

The CNN model with Inception-V3 demonstrated an 81% sensitivity for the detection of referable DR, which is considered safe according to the NHS’ Exeter Standard, exceeding the established minimum sensitivity of 80% [23]. Only 9 of the 47 images with referable DR were missed by the software; among negative results, only 4% (9 of 251) actually had referable DR. Also, with a specificity of 98% for referable DR diagnosis, it exceeded the recommended value of 95% [23], correctly excluding people without the disease with only 2% false positives (6 of 248). This is an important attribute, as individuals who screen positive are advised to attend a retina specialised clinic, and unnecessary referrals represent a waste of resources. Moreover, the high specificity enables us to conclude that a positive software result rules in referable DR. The values of sensitivity and specificity observed for DR were similar to those obtained for referable DR.

Several studies report sensitivities varying from 87 to 97% and specificities from 59 to 98% [24]. A recent meta-analysis showed that deep learning algorithms perform well in screening for DR, with a pooled area under the receiver operating characteristic curve of 0.97 (95% CI 0.95–0.98) and a pooled sensitivity and specificity of 83% (95% CI 83–83%) and 92% (95% CI 92–92%), respectively, for detecting referable DR from fundus images, with an additional reduction in misdiagnosis [25]. Studies reporting diagnostic accuracy measures of other deep learning algorithms in the same population, using the EyePACS dataset, have reported sensitivities ranging from 30 to 100% for DR and from 82 to 92% for referable DR, and specificities varying from 85 to 99% for DR and from 71 to 97% for referable DR [25, 26]. Compared to the EyeArt, which showed a sensitivity and a specificity of 91% (95% CI 90.9–91.7%) for referable DR detection, the CNN model with Inception-V3 had statistically lower sensitivity but statistically higher specificity. The IDx-DR artificial intelligence diagnostic system, clinically validated in the same population, had values of sensitivity and specificity similar to the CNN model with Inception-V3, with a sensitivity of 87% and a specificity of 90%, both above the minimums set by the FDA of 85% for sensitivity and 82.5% for specificity [27]. Deep learning algorithms have also recently been applied to retinal optical coherence tomography (OCT) and OCT angiography (OCTA) images for the automated diagnosis of DR, because these imaging techniques are more sensitive to early diabetic retinal changes [28]. Diagnostic accuracy studies have not yet been published, but preliminary internal validation tests show diagnostic accuracy comparable to that obtained with fundus images: Sandhu et al. [29] reported a sensitivity of 93% and a specificity of 95% for their algorithm, and the work of Li et al. [30] achieved a sensitivity of 90% and a specificity of 95%; using OCTA images, a sensitivity of 98% and a specificity of 87% were reported by Sandhu et al. [31], and the system of Le et al. [32] achieved 84% sensitivity and 91% specificity.

Predictive values are measures that specifically help the clinician estimate the probability that an individual has the disease [16]. The software achieved a high NPV of 96%, the probability that a patient does not have referable DR given a negative test result. This value gives very high certainty for safely excluding referable DR in subjects who screen negative with the CNN model with Inception-V3, who therefore do not need to be sent to a medical appointment with an ophthalmologist. A PPV of 86% is the probability that a positive test result indicates the presence of referable DR, which is a good value in screening settings, where the low prevalence of disease tends to lower the PPV [33]. To the best of our knowledge, only 2 other artificial intelligence systems have reported predictive values: the CNN model presented a higher PPV (86 vs. 72%) than the EyeArt and a slightly lower NPV (96 vs. 98%), although the EyeArt values fall within our CIs; the study of Kanagasingam et al. [34] reported an NPV of 100% but a PPV of only 12% (95% CI 8–18%), which is significantly lower than the CNN model’s and reflects a high false-positive rate, despite a high specificity of 92% (95% CI 87–96%) that is not significantly different from the CNN model’s specificity.

When a clinician needs to interpret the test result for an individual patient in a different population, likelihood ratios are advantageous, as they are independent of prevalence and inherent to the test. A diagnostic test is more useful the greater the magnitude of its positive likelihood ratio and the smaller its negative likelihood ratio. In our study, for referable DR, the positive likelihood ratio was 33 and the negative likelihood ratio 0.20. Both values are statistically significantly higher than those reported in a systematic review: 14.11 (95% CI 9.91–20.07) and 0.10 (95% CI 0.07–0.16) for the positive and negative likelihood ratios, respectively [25]. Because clinicians are more familiar with thinking in terms of probabilities, Fagan’s nomogram was used to translate likelihood ratios into post-test probabilities: after a positive CNN model with Inception-V3 result, the probability of a patient having referable DR increased from 16 to 86%. This probability is high and indicates that referable DR is likely [35, 36]; therefore, observation by an ophthalmologist is recommended. It is not very high though, which is not surprising, since false positives are expected in screening. Considering the recommendation that a clinician should be at least 90% certain to make a definite diagnosis [37], and that the software is intended for screening, 86% seems to provide high certainty regarding the presence of disease and to justify sending the patient for diagnostic confirmation by further observation. Likewise, with a negative CNN model with Inception-V3 result, the probability of referable DR shifts from 16 to 4%, and the disease can be excluded with reasonable certainty.

The CNN model with Inception-V3 demonstrated good diagnostic accuracy as an artificial intelligence screening test for referable DR, identifying and excluding disease in a balanced way, with few false positives and few false negatives. Predictive values, likelihood ratios and the post-test probability of disease showed that the software supports clinicians in determining a patient’s care pathway in the screening of referable DR, helping to estimate the probability that an individual patient has or does not have the disease. As such, all measures of diagnostic accuracy converge to demonstrate the adequacy and clinical usefulness of the CNN model with Inception-V3, with safety in screening decisions about individual people and advantages for healthcare systems.

A limitation of our study is that the reference standard panel was not exclusively composed of retina specialists, as one grader was a general ophthalmologist. The literature shows lower levels of agreement between general ophthalmologists than between retina specialists [7, 8]. Additionally, the DR prevalence in the study population (16%) was lower than the value used for the sample size calculation (20%); this could have affected the power of the test, with wider CIs and a subsequent decrease in precision. However, the CI for the prevalence includes the prevalence used for the sample size calculation. On the other hand, we assumed a sensitivity of 70% for the sample size calculation based on a previous study [14] that used an older version of the algorithm and included only patients who screened positive for DR, which may have led to higher precision.

Further work is needed to validate the CNN model with Inception-V3 with a view to its implementation in clinical practice, namely by conducting a multicentre study with patients with diabetes followed at primary care institutions, comparing its results against a panel of retina specialists as the reference standard. Assessment of its effectiveness in real-world clinical settings is also desirable. Future research should also investigate the diagnostic accuracy of the software on retinal images acquired with different fundus cameras.

This study analysed the properties of the 2019 CNN model with Inception-V3, an artificial intelligence-based system for the automated classification of single-field digital eye fundus images acquired with table-top fundus cameras, for referable DR screening. We concluded that the deep learning-based CNN model compared favourably with the reference standard and met the pre-established values of the Food and Drug Administration and the NHS’ Exeter Standard. Therefore, this diagnostic test accuracy study provides evidence that the software correctly identifies and rules out DR, and it justifies a subsequent multicentre prospective diagnostic accuracy study with an independent cohort of individuals with diabetes to enable generalisation of the results. Considering the clinical application of this diagnostic test for screening, it gives the clinician strong evidence to support ruling out referable DR in images classified as negative, with few false negatives. Our results suggest that integrating the CNN model with Inception-V3 into the DR screening workflow shows promise in alleviating the demanding and time-consuming task of ophthalmologists at reading centres.

The authors acknowledge Telmo Barbosa, MSc, Fraunhofer Portugal AICOS, for the development and management of the web annotation tool for classification of retinal images by ophthalmologists, and Tânia Borges, MD, MSc, and Gustavo Bacelar-Silva, MD, MSc, for providing image classifications. We acknowledge João Gonçalves, MSc, Fraunhofer Portugal AICOS, for providing the CNN model with Inception-V3 outputs, Ricardo Graça, MSc, Fraunhofer Portugal AICOS, for inputs on data preparation, and Ana Correia de Barros, PhD, Fraunhofer Portugal AICOS, for grammatical and stylistic revision of the manuscript. We also acknowledge Kaggle Inc. (https://www.kaggle.com/c/diabetic-retinopathy-detection/data) and EyePACS, LLC (http://www.eyepacs.com) for providing the eye fundus images dataset used in this study.

Ethical approval was not required or obtained for this study, because we used a database of retinal images collected by EyePACS, LLC and publicly available through Kaggle Inc.

S.R. and F.S. are employees of Fraunhofer Portugal AICOS, Porto, Portugal, an institution that is developing a decision-support system for automatic diabetic retinopathy classification. M.M.-S. and M.D.-M. have no financial involvement with Fraunhofer Portugal AICOS and no competing interests. There are no other relationships or activities that could appear to have influenced the submitted work.

This work was supported by Fraunhofer Portugal AICOS (Porto, Portugal); the development of the web-platform for classification of retinal images was supported by Project MDevNet – National Network for Transfer of Knowledge of Medical Devices, in the scope of the Portuguese national programme NORTE 2020 under Portugal 2020. The sponsor or funding organization had no role in the design or conduct of this research.

S.R. conception and design, acquisition, analysis and interpretation of data, article draft, final approval of the version to be published, agreement to be accountable for all aspects of the work. M.D.-M. interpretation of data, article revision, final approval of the version to be published, agreement to be accountable for all aspects of the work. F.S. conception and design, acquisition and interpretation of data, article revision, final approval of the version to be published, agreement to be accountable for all aspects of the work. M.M.-S. conception and design, interpretation of data, article revision, final approval of the version to be published, agreement to be accountable for all aspects of the work.

1. WHO. Prevention of blindness from diabetes mellitus. WHO [cited 2017 Nov 7]. Available from: http://www.who.int/diabetes/publications/prevention_diabetes2006/en/
2. Lewis K. Improving patient compliance with diabetic retinopathy screening and treatment. Community Eye Health. 2015;28(92):68–9.
3. Browning DJ, editor. Diabetic Retinopathy: Evidence-Based Management. New York: Springer-Verlag; 2010 [cited 2020 Jul 1]. Available from: https://www.springer.com/gp/book/9780387858999
4. Das T, Raman R, Ramasamy K, Rani PK. Telemedicine in diabetic retinopathy: current status and future directions. Middle East Afr J Ophthalmol. 2015 Apr-Jun;22(2):174–8.
5. Review of Diabetic Retinopathy Screening Methods and Programmes Adopted in Different Parts of the World. touchOPHTHALMOLOGY [cited 2020 Jan 13]. Available from: https://www.touchophthalmology.com/review-of-diabetic-retinopathy-screening-methods-and-programmes-adopted-in-different-parts-of-the-world/
6. Diabetic retinopathy (DR): management and referral. Community Eye Health. 2015;28(92):70–1.
7. Ruamviboonsuk P, Teerasuwanajak K, Tiensuwan M, Yuttitham K; Thai Screening for Diabetic Retinopathy Study Group. Interobserver agreement in the interpretation of single-field digital fundus images for diabetic retinopathy screening. Ophthalmology. 2006 May;113(5):826–32.
8. Gegundez-Arias ME, Ortega C, Garrido J, Ponte B, Alvarez F, Marin D. Inter-observer reliability and agreement study on early diagnosis of diabetic retinopathy and diabetic macular edema risk. In: Ortuño F, Rojas I, editors. Bioinformatics and Biomedical Engineering. Lecture Notes in Computer Science. Springer International Publishing; 2016. pp. 369–79.
9. Yau JW, Rogers SL, Kawasaki R, Lamoureux EL, Kowalski JW, Bek T, et al.; Meta-Analysis for Eye Disease (META-EYE) Study Group. Global prevalence and major risk factors of diabetic retinopathy. Diabetes Care. 2012 Mar;35(3):556–64.
10. Guariguata L, Whiting DR, Hambleton I, Beagley J, Linnenkamp U, Shaw JE. Global estimates of diabetes prevalence for 2013 and projections for 2035. Diabetes Res Clin Pract. 2014 Feb;103(2):137–49.
11. Korot E, Wood E, Weiner A, Sim DA, Trese M. A renaissance of teleophthalmology through artificial intelligence. Eye (Lond). 2019 Jun;33(6):861–3.
12. Ting DS, Pasquale LR, Peng L, Campbell JP, Lee AY, Raman R, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol. 2019 Feb;103(2):167–75.
13. Gonçalves J, Conceição T, Soares F. Inter-observer reliability in computer-aided diagnosis of diabetic retinopathy. 2020 [cited 2020 Jan 13]. pp. 481–91. Available from: https://www.scitepress.org/PublicationsDetail.aspx?ID=DKAg0ha196k=&t=1
14. Felgueiras et al., 2018. Mobile-based Risk Assessment of Diabetic Retinopat.pdf [cited 2020 Mar 16]. Available from: https://www.scitepress.org/Papers/2018/65997/65997.pdf
15. Cerda LJ, Cifuentes AL. [Clinical use of diagnostic tests (Part 1). Analysis of the properties of a diagnostic test]. Rev Chil Infectologia Organo Of Soc Chil Infectologia. 2010 Jun;27(3):205–8.
16. Cifuentes L, Cerda J. [Clinical use of diagnostic tests (Part 2). Clinical application and usefulness of a diagnostic test]. Rev Chil Infectologia Organo Of Soc Chil Infectologia. 2010 Aug;27(4):316–9.
17. Felgueiras et al., 2018. Mobile-based Risk Assessment of Diabetic Retinopat.pdf [cited 2018 Sep 10]. Available from: http://www.scitepress.org/Papers/2018/65997/65997.pdf
18. Cuadros J, Sim I. EyePACS: an open source clinical communication system for eye care. Stud Health Technol Inform. 2004;107(Pt 1):207–11.
19. Bujang MA, Adnan TH. Requirements for minimum sample size for sensitivity and specificity analysis. J Clin Diagn Res. 2016 Oct;10(10):YE01–6.
20. Cuadros JA. Telemedicine-based diabetic retinopathy screening programs: an evaluation of utility and cost-effectiveness. Smart Homecare Technology and TeleHealth. Dove Press; 2015;3:119–27 [cited 2020 Jun 26]. Available from: https://www.dovepress.com/telemedicine-based-diabetic-retinopathy-screening-programs-an-evaluati-peer-reviewed-fulltext-article-SHTT
21. Wu L, Fernandez-Loaiza P, Sauma J, Hernandez-Bogantes E, Masis M. Classification of diabetic retinopathy and diabetic macular edema. World J Diabetes. 2013 Dec;4(6):290–4.
22. STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration. BMJ Open [cited 2020 Jan 26]. Available from: https://bmjopen.bmj.com/content/6/11/e012799?ijkey=fa57deed43c56ee4b7bc0c79b30659f322ba2ac0&keytype2=tf_ipsecsha
23. Scanlon PH. The English National Screening Programme for diabetic retinopathy 2003-2016. Acta Diabetol. 2017 Jun;54(6):515–25.
24. Raman R, Srinivasan S, Virmani S, Sivaprasad S, Rao C, Rajalakshmi R. Fundus photograph-based deep learning algorithms in detecting diabetic retinopathy. Eye (Lond). 2019 Jan;33(1):97–109.
25. Islam MM, Yang HC, Poly TN, Jian WS, Jack Li YC. Deep learning algorithms for detection of diabetic retinopathy in retinal fundus photographs: a systematic review and meta-analysis. Comput Methods Programs Biomed. 2020 Jul;191:105320.
26. Bhaskaranand M, Ramachandra C, Bhat S, Cuadros J, Nittala MG, Sadda SR, et al. The value of automated diabetic retinopathy screening with the EyeArt system: a study of more than 100,000 consecutive encounters from people with diabetes. Diabetes Technol Ther. 2019 Nov;21(11):635–43.
27. Abràmoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med. 2018 Aug;1(1):39.
28. OCT, OCTA show promise in screening for DR. Optometry Times [cited 2020 Aug 28]. Available from: https://www.optometrytimes.com/view/oct-octa-show-promise-in-screening-for-dr
29. Sandhu HS, Eltanboly A, Shalaby A, Keynton RS, Schaal S, El-Baz A. Automated diagnosis and grading of diabetic retinopathy using optical coherence tomography. Invest Ophthalmol Vis Sci. 2018 Jun;59(7):3155–60.
30. Li X, Shen L, Shen M, Tan F, Qiu CS. Deep learning based early stage diabetic retinopathy detection using optical coherence tomography. Neurocomputing. 2019 Dec;369:134–44.
31. Sandhu HS, Eladawi N, Elmogy M, Keynton R, Helmy O, Schaal S, et al. Automated diabetic retinopathy detection using optical coherence tomography angiography: a pilot study. Br J Ophthalmol. 2018 Nov;102(11):1564–9.
32. Le D, Alam M, Yao CK, Lim JI, Hsieh YT, Chan RV, et al. Transfer learning for automated OCTA detection of diabetic retinopathy. Transl Vis Sci Technol. 2020 Jul;9(2):35. Available from: https://tvst.arvojournals.org/article.aspx?articleid=2770240
33. Šimundić AM. Measures of diagnostic accuracy: basic definitions. EJIFCC. 2009 Jan;19(4):203–11.
34. Evaluation of Artificial Intelligence-Based Grading of Diabetic Retinopathy in Primary Care. JAMA Network Open [cited 2020 May 9]. Available from: https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2703944
35. Essentials of Family Medicine.pdf [cited 2020 Jun 8]. Available from: https://simidchiev.net/lubokirov/Essentials_of_Family_Medicine_Sloane.pdf
36. PubMed Central Full Text PDF [cited 2020 Jun 8]. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1495095/pdf/jgi_10750.pdf
37. Parikh R, Parikh S, Arun E, Thomas R. Likelihood ratios: clinical application in day-to-day practice. Indian J Ophthalmol. 2009 May-Jun;57(3):217–21.