TY - GEN
T1 - Automated Diabetic Retinopathy Grading using Resnet
AU - Elswah, Doaa K.
AU - Elnakib, Ahmed A.
AU - El-Din Moustafa, Hossam
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/9/8
Y1 - 2020/9/8
N2 - This paper presents a deep learning framework for classifying diabetic retinopathy (DR) grades from fundus images. The proposed framework is composed of three stages. First, the fundus image is preprocessed using intensity normalization and augmentation. Second, the preprocessed image is fed to a ResNet Convolutional Neural Network (CNN) to extract a compact feature vector for grading. Finally, a classification step detects DR and determines its grade (e.g., mild, moderate, severe, or Proliferative Diabetic Retinopathy (PDR)). The proposed framework is trained on the challenging ISBI'2018 Indian Diabetic Retinopathy Image Dataset (IDRiD). To remove training bias, the data are balanced so that each DR grade is represented by the same number of images during training. The proposed system shows improved performance with respect to related techniques on the same data, achieving the highest overall classification accuracy of 86.67%.
AB - This paper presents a deep learning framework for classifying diabetic retinopathy (DR) grades from fundus images. The proposed framework is composed of three stages. First, the fundus image is preprocessed using intensity normalization and augmentation. Second, the preprocessed image is fed to a ResNet Convolutional Neural Network (CNN) to extract a compact feature vector for grading. Finally, a classification step detects DR and determines its grade (e.g., mild, moderate, severe, or Proliferative Diabetic Retinopathy (PDR)). The proposed framework is trained on the challenging ISBI'2018 Indian Diabetic Retinopathy Image Dataset (IDRiD). To remove training bias, the data are balanced so that each DR grade is represented by the same number of images during training. The proposed system shows improved performance with respect to related techniques on the same data, achieving the highest overall classification accuracy of 86.67%.
UR - http://www.scopus.com/inward/record.url?scp=85096210420&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85096210420&partnerID=8YFLogxK
U2 - 10.1109/NRSC49500.2020.9235098
DO - 10.1109/NRSC49500.2020.9235098
M3 - Conference contribution
AN - SCOPUS:85096210420
T3 - National Radio Science Conference, NRSC, Proceedings
SP - 248
EP - 254
BT - Proceedings of 2020 37th National Radio Science Conference, NRSC 2020
A2 - Sadek, Rowayda
A2 - Ashour, Mohamed
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 37th National Radio Science Conference, NRSC 2020
Y2 - 8 September 2020 through 10 September 2020
ER -