MSC/PHD INTAKE 

 

Electrical & Electronic Engineering at UNIVERSITI TEKNOLOGI PETRONAS

 

 SEND YOUR RESUME NOW!

Supervisor: Prof Fabrice MERIAUDEAU


 

Research Topic:

Deep Learning Techniques for Automatic Detection of DME or DR on OCT Images

 

Eye diseases such as Diabetic Retinopathy (DR) and Diabetic Macular Edema (DME) are the most common causes of irreversible vision loss in individuals with diabetes. In the United States alone, health care and associated costs related to eye diseases are estimated at almost $500 M [1], with the number of prevalent DR cases expected to grow exponentially, affecting over 300 M people worldwide by 2025 [2]. Early detection and treatment of DR and DME play a major role in preventing adverse effects such as blindness. DME is characterized by an increase in retinal thickness within one disc diameter of the fovea center, with or without hard exudates, and is sometimes associated with cysts. Spectral Domain OCT (SD-OCT) [9], which images the depth of the retina with high resolution and fast image acquisition, is an adequate tool for DME identification compared to fundus images. However, to our knowledge, very few works have addressed the specific problem of detecting DME and its associated features in OCT images.

Srinivasan et al. [3] proposed a classification method to distinguish normal OCT volumes from DME and Age-related Macular Degeneration (AMD) volumes. The OCT images are pre-processed by reducing the speckle noise through sparsity enhancement in a transform domain [11] and by flattening the retinal curvature to reduce inter-patient variations. Histograms of Oriented Gradients (HOG) [12] are then extracted from each slice of a volume and fed to a linear Support Vector Machine (SVM). Applied to a dataset of 45 patients equally subdivided into the three aforementioned classes, the method led to correct classification rates of 100%, 100% and 86.67% for normal, DME and AMD patients, respectively. The images used in their paper are publicly available, but they are already pre-processed (noise removed), do not offer much variability in terms of DME lesions, have different OCT volume sizes, and some of them (without specifying which) were excluded from the training phase; all these reasons prevent us from using this dataset to benchmark our work.
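
The HOG-plus-linear-SVM idea above can be sketched as follows. This is a minimal, illustrative implementation of the descriptor family only, not the paper's exact configuration: the cell size, bin count and normalization here are assumptions, and a real pipeline would use an established HOG implementation before the SVM stage.

```python
import numpy as np

def hog_descriptor(image, cell=8, bins=9):
    """Simplified HOG: per-cell, magnitude-weighted histograms of
    unsigned gradient orientations, concatenated and L2-normalized.
    Cell size and bin count are illustrative, not the paper's values."""
    # Image gradients via central differences.
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, as in standard HOG.
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0

    h, w = image.shape
    ny, nx = h // cell, w // cell
    hist = np.zeros((ny, nx, bins))
    for i in range(ny):
        for j in range(nx):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            # Assign each pixel's orientation to a bin, weighted by magnitude.
            idx = np.minimum((a / (180.0 / bins)).astype(int), bins - 1)
            for b in range(bins):
                hist[i, j, b] = m[idx == b].sum()
    # Flatten; the full method feeds vectors like this to a linear SVM.
    v = hist.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

For a 32×32 slice with 8×8 cells and 9 bins, this yields a 144-dimensional descriptor per slice.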

Venhuizen et al. [4] recently proposed a method for OCT image classification using Bag-of-Words (BoW) models [5]. The method starts with the detection and selection of keypoints in each individual B-scan, keeping the most salient points corresponding to the top 3% of the vertical gradient values. A 9×9-pixel texton is then extracted around each keypoint, and Principal Component Analysis (PCA) is applied to reduce the dimension of every texton from 81 to 9. All extracted feature vectors are used to create a dictionary using k-means clustering, and each OCT volume is then represented as a histogram that captures the codeword occurrences. These histograms are used as feature vectors to train a Random Forest (RF) classifier with a maximum of 100 trees. The method was used to classify OCT volumes between AMD and normal cases and achieved an Area Under the Curve (AUC) of 0.984 on a dataset of 384 OCT volumes.
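
The keypoint-to-histogram part of this pipeline can be sketched as below. The codebook size and iteration counts are assumptions for illustration; the actual paper learns the dictionary on a training set and adds a Random Forest on top of the histograms.

```python
import numpy as np

rng = np.random.default_rng(0)

def bow_histogram(bscans, k=8, patch=9, pca_dim=9, top_frac=0.03):
    """Bag-of-Words sketch: gradient keypoints -> 9x9 textons -> PCA ->
    k-means codebook -> normalized codeword-occurrence histogram."""
    textons = []
    r = patch // 2
    for img in bscans:
        # Vertical gradient magnitude; keep the top fraction as keypoints.
        gy = np.abs(np.gradient(img.astype(float), axis=0))
        thr = np.quantile(gy, 1.0 - top_frac)
        ys, xs = np.where(gy >= thr)
        for y, x in zip(ys, xs):
            if r <= y < img.shape[0] - r and r <= x < img.shape[1] - r:
                textons.append(img[y - r:y + r + 1, x - r:x + r + 1].ravel())
    X = np.asarray(textons, float)
    # PCA via SVD: project 81-dim textons down to pca_dim components.
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:pca_dim].T
    # Plain k-means (a few Lloyd iterations) to learn the codebook.
    centers = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(10):
        d = ((Z[:, None, :] - centers[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        for c in range(k):
            if (lab == c).any():
                centers[c] = Z[lab == c].mean(0)
    # Volume descriptor: normalized histogram of codeword assignments.
    hist = np.bincount(lab, minlength=k).astype(float)
    return hist / hist.sum()
```

The resulting k-bin histogram is the per-volume feature vector that the classifier consumes.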

Liu et al. proposed a methodology for detecting macular pathology in OCT images using Local Binary Patterns (LBP) and gradient information as attributes [6]. The method starts by aligning and flattening the images and creating a 3-level multi-scale spatial pyramid. Edge and LBP histograms are then extracted from each block of every level of the pyramid. All the obtained histograms are concatenated into a global descriptor whose dimensionality is reduced using PCA. Finally, an SVM with a Radial Basis Function (RBF) kernel is used as the classifier. The method achieved good results in detecting OCT scans containing different pathologies such as DME or AMD, with an AUC of 0.93 on a dataset of 326 OCT scans.
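
The LBP encoding that underlies these texture histograms can be sketched as below. This is the basic 8-neighbour code; the rotation-invariant and multiresolution variants of [8] add a neighbourhood sampling scheme and a code mapping on top of this.

```python
import numpy as np

def lbp_image(img):
    """Classic 8-neighbour LBP code per pixel (borders excluded).
    Each pixel is compared with its 8 neighbours; the resulting bits
    form a code in [0, 255]."""
    c = img[1:-1, 1:-1]
    # Neighbour offsets in a fixed clockwise order around the centre.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes: the texture feature per block."""
    h = np.bincount(lbp_image(img).ravel(), minlength=bins).astype(float)
    return h / h.sum()
```

In the spatial-pyramid setup, one such histogram is computed per block and per level, then all histograms are concatenated before PCA.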

Lemaître et al. [7] developed a classification framework based on Local Binary Patterns (LBP) features [8] to describe the texture of OCT images, combined with dictionary learning using Bag-of-Words (BoW) models [5]. They proposed to extract 2D and 3D LBP features from OCT images and volumes, respectively. The LBP descriptors are extracted either from the entire sample or from local patches within individual samples. Numerous experiments were conducted, and the authors achieved a sensitivity of 81.2% and a specificity of 93.2% for their best configuration.

On the same dataset, Sankar et al. [14] recently proposed a different approach, which consists in addressing the issue as an anomaly detection problem. Their method not only allows the classification of an OCT volume, but also enables the identification of the individual diseased B-scans inside the volume. The approach is based on modeling the appearance of normal OCT images with a Gaussian Mixture Model (GMM) and detecting abnormal OCT images as outliers; the classification of an OCT volume is then based on the number of detected outliers. Experimental results on two different datasets show that the proposed method achieves a sensitivity and specificity of 80% and 93% on the first dataset, and 100% and 80% on the second. The method achieves better classification performance than other recently published work, but it requires tuning the GMM parameters, whose effect should be tested on a larger database.
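
The outlier-counting logic can be sketched as below. For simplicity this uses a single Gaussian rather than a full mixture, and the distance and count thresholds are illustrative assumptions; per-B-scan feature extraction is assumed to happen upstream.

```python
import numpy as np

def fit_normal_model(features):
    """Fit one Gaussian to per-B-scan features of *normal* training scans
    (a one-component simplification of the GMM appearance model)."""
    mu = features.mean(0)
    cov = np.cov(features, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])   # regularize for invertibility
    return mu, np.linalg.inv(cov)

def volume_is_abnormal(volume_feats, mu, cov_inv, dist_thr=3.0, count_thr=2):
    """Flag a volume when enough of its B-scans are Mahalanobis outliers;
    the outlier mask also identifies the individual suspicious B-scans."""
    d = volume_feats - mu
    maha = np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))
    n_outliers = int((maha > dist_thr).sum())
    return n_outliers >= count_thr, n_outliers
```

Because the decision is a count over per-B-scan scores, the same model yields both the volume-level label and the list of diseased B-scans.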

All the previous works relied on "hand-crafted" features. In this proposal, we would like to investigate the use of recent deep learning techniques [10] (deep neural networks, convolutional neural networks, deep belief networks and recurrent neural networks) and evaluate their benefits against recent state-of-the-art results. This research will be done in collaboration with research facilities worldwide, providing data from different OCT machines and from patients of different ethnicities. These collaborators are located in Malaysia, France, Singapore and Hong Kong.
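
By contrast with hand-crafted descriptors, convolutional networks learn their filters from data. The core building block is the convolution-plus-nonlinearity below, shown here from scratch for illustration only (any real experiments would use a deep learning framework, and this follows the cross-correlation convention those frameworks use).

```python
import numpy as np

def conv2d_valid(x, kernel):
    """'Valid' 2-D convolution (cross-correlation convention):
    slide the kernel over the image and take the elementwise sum."""
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (x[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(x):
    """Rectified linear unit, the standard CNN nonlinearity."""
    return np.maximum(x, 0.0)
```

Stacking many such learned filters, nonlinearities and pooling stages is what lets a CNN replace the HOG/LBP feature-engineering step in the works surveyed above.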

 

References: 

[1] S. Sharma, A. Oliver-Hernandez, W. Liu, J. Walt, The impact of diabetic retinopathy on health-related quality of life, Curr. Op. Ophtal. 16 (2005). 

[2] S. Wild, G. Roglic, A. Green, R. Sicree, H. King, Global prevalence of diabetes estimates for the year 2000 and projections for 2030, Diabetes Care 27 (5) (2004) 1047-1053.


[3] P. P. Srinivasan, L. A. Kim, P. S. Mettu, S.W. Cousins, G. M. Comer, J. A. Izatt, S. Farsiu, Fully automated detection of diabetic macular edema and dry age-related macular degeneration from optical coherence tomography images, Biomedical Optical Express 5 (10) (2014) 3568-3577. 

[4] F. G. Venhuizen, B. van Ginneken, B. Bloemen, M. J. P. P. van Grinsven, R. Philipsen, C. Hoyng, T. Theelen, C. I. Sanchez, Automated age-related macular degeneration classification in OCT using unsupervised feature learning, in: SPIE Medical Imaging, Vol. 9414, 2015, p. 94141I.

[5] J. Sivic, A. Zisserman, Video Google: a text retrieval approach to object matching in videos, in: IEEE ICCV, 2003, pp. 1470-1477.

[6] S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, S. Farsiu, Automatic segmentation of seven retinal layers in SD-OCT images congruent with expert manual segmentation, Optics Express 18 (18) (2010) 19413-19428.

[7] G. Lemaître, M. Rastgoo, J. Massich, S. Sankar, F. Meriaudeau, D. Sidibé, Classification of SD-OCT volumes with LBP: application to DME detection, in: Medical Image Computing and Computer-Assisted Intervention (MICCAI), Ophthalmic Medical Image Analysis Workshop (OMIA), 2015.

[8] T. Ojala, M. Pietikäinen, T. Mäenpää, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (7) (2002) 971-987.

[9] T. C. Chen, B. Cense, M. C. Pierce, N. Nassif, B. H. Park, S. H. Yun, B. R. White, B. E. Bouma, G. J. Tearney, J. F. de Boer, Spectral domain optical coherence tomography: ultra-high speed, ultra-high resolution ophthalmic imaging, Arch. Ophthalmol. 123 (12) (2005) 1715-1720.

[10] https://en.wikipedia.org/wiki/Deep_learning

[11] K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3D transform-domain collaborative filtering, IEEE Transactions on Image Processing 16 (8) (2007) 2080-2095.

[12] N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, pp. 886-893.

[13] G. Zhao, T. Ahonen, J. Matas, M. Pietikäinen, Rotation-invariant image and video description with local binary pattern features, IEEE Transactions on Image Processing 21 (4) (2012) 1465-1477.

[14] S. Sankar, D. Sidibé, Y. Cheung, T. Y. Wong, E. Lamoureux, D. Milea, F. Meriaudeau, Classification of SD-OCT volumes for DME detection: an anomaly detection approach, in: SPIE Medical Imaging, 2016, pp. 97852O-97852O-6.

 

Requirements

Malaysian citizenship

Qualification in Electrical & Electronic Engineering / Computer Engineering / Biomedical Engineering / Applied Mathematics or equivalent from a recognized institution.

CGPA: Bachelor > 3.00, MSc > 3.50, or equivalent

Maximum age for PhD applicants: 35 years

 

Incentives

UTP GLA & MyBrain

 

CONTACT US

Application instructions

If you are interested, please send your resume to the contact below.

Nadira Nordin
Centre for Intelligent Signal & Imaging Research (CISIR)
Block 22, Universiti Teknologi PETRONAS, Bandar Seri Iskandar,
31750 Tronoh, Perak, Malaysia.


Tel      : +605 368 7888  |   Website: http://cisir.utp.edu.my/

 

 

 
