The rapid growth and spread of radiographic equipment in medical centres have resulted in a corresponding increase in the number of medical X-ray images produced, so more efficient and effective image classification techniques are required. Three techniques for automatic classification of medical X-ray images were compared. In the first two, a bag-of-visual-words model and a Convolutional Neural Network (CNN) were each used to extract features from the images, and each group of feature vectors was used to train a linear support vector machine classifier. In the third, a fine-tuned CNN performed end-to-end classification. A pre-trained CNN was used to overcome the limited size of the dataset. The three techniques were evaluated on the ImageCLEF 2007 medical database, which provides medical X-ray images in 116 categories. The experimental results showed that the fine-tuned CNN outperformed the other two techniques, achieving per-class classification accuracy above 80% in 60 classes, compared with 24 and 26 classes for the bag-of-visual-words and CNN-extracted features respectively. However, certain classes, such as those within the same sub-body region, remain difficult to classify accurately due to inter-class similarity.
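To make the two CNN-based techniques concrete, the sketch below shows one plausible setup, assuming a torchvision ResNet-18 as a stand-in for the paper's unspecified pre-trained CNN and hypothetical `train_imgs`/`train_labels` data; it is an illustrative sketch, not the authors' implementation.

import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import LinearSVC

NUM_CLASSES = 116  # categories in the ImageCLEF 2007 medical database

# --- Pre-trained CNN as a fixed feature extractor + linear SVM ---
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()   # drop the ImageNet classification head
backbone.eval()

def extract_features(batch):
    """Return pooled CNN features for a batch of images (N, 3, 224, 224)."""
    with torch.no_grad():
        return backbone(batch).numpy()

# train_feats = extract_features(train_imgs)        # hypothetical data
# svm = LinearSVC().fit(train_feats, train_labels)  # linear SVM on CNN features

# --- Fine-tuned CNN for end-to-end classification ---
finetune_net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
finetune_net.fc = nn.Linear(finetune_net.fc.in_features, NUM_CLASSES)
optimizer = torch.optim.SGD(finetune_net.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# A standard supervised training loop over labelled X-ray batches would follow.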
Zare, M. R., Alebiosu, D. O., & Lee, S. L. (2018). Comparison of Handcrafted Features and Deep Learning in Classification of Medical X-ray Images. In Proceedings - 2018 4th International Conference on Information Retrieval and Knowledge Management: Diving into Data Sciences, CAMP 2018 (pp. 73–78). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/INFRKM.2018.8464688