A teleophthalmology support system based on the visibility of retinal elements using the CNNs

Gustavo Calderon-Auza, Cesar Carrillo-Gomez, Mariko Nakano, Karina Toscano-Medina, Hector Perez-Meana, Ana Gonzalez H. Leon, Hugo Quiroz-Mercado

Research output: Contribution to journal › Article › peer-review

Abstract

© 2020 by the authors. Licensee MDPI, Basel, Switzerland.

This paper proposes a teleophthalmology support system that uses object-detection and semantic-segmentation algorithms, namely the faster region-based CNN (Faster R-CNN) and SegNet, built on several CNN backbones such as VGG16, MobileNet, and AlexNet. These are used to segment and analyze the principal anatomical elements: the optic disc (OD), the region of interest (ROI) composed of the macular region, the real retinal region, and the vessels. Unlike conventional retinal image quality assessment systems, the proposed system indicates possible causes of low image quality, helping the ophthalmoscope operator and the patient acquire and transmit a better-quality image to a central eye hospital for diagnosis. The proposed system consists of four steps: OD detection, OD quality analysis, obstruction detection in the ROI, and vessel segmentation. Faster R-CNN and SegNet are used for OD detection, artefact detection, and vessel segmentation, while transfer learning is used for the OD quality analysis. The proposed system achieves accuracies of 0.93 for OD detection, 0.86 for OD image quality, 1.0 for artefact detection, and 0.98 for vessel segmentation. As a global performance metric, the kappa-based agreement score between an ophthalmologist and the proposed system is calculated; it is higher than the score between an ophthalmologist and a general practitioner.
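The abstract's global metric is a kappa-based agreement score between two graders. As a minimal illustrative sketch (not the authors' implementation, and the example labels are invented), Cohen's kappa for two raters can be computed in plain Python as the observed agreement corrected for chance agreement:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement p_o: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement p_e from each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[lbl] * counts_b[lbl]
              for lbl in set(rater_a) | set(rater_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical quality grades for 10 fundus images.
ophthalmologist = ["good", "good", "bad", "good", "bad",
                   "good", "good", "bad", "good", "good"]
system          = ["good", "good", "bad", "good", "good",
                   "good", "good", "bad", "good", "bad"]
print(round(cohens_kappa(ophthalmologist, system), 3))  # → 0.524
```

Here p_o = 0.8 (8 of 10 agreements) and p_e = 0.58, giving kappa ≈ 0.524; a kappa of 1.0 means perfect agreement and 0 means agreement no better than chance.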
Original language: American English
Journal: Sensors (Switzerland)
DOIs
State: Published - 2 May 2020

