Explainable Deep Learning approach for Shoulder Abnormality Detection in X-Rays Dataset


Pawan Mall
Pradeep Singh

Abstract

Computer vision researchers and decision-makers have struggled to understand how deep neural networks (DNNs) accomplish image classification tasks and to interpret their results. Because their internal workings are difficult to inspect, these models are commonly referred to as "black boxes." Explaining DNNs can, however, be incorporated into the development process. In this research work, we introduce an explainable technique for shoulder abnormality detection. The motivation behind this study is to strengthen patients' and medical professionals' trust in DNN technology, which is now widely used in the medical domain. The proposed abnormality detector, based on IGrad-CAM++, localizes abnormalities in shoulder X-rays. Grad-CAM is a common approach that combines the activation maps obtained from the model to create such a visualization; however, the averaged gradient-based terms it relies on understate the contribution of the model's learned representations to its predictions. To address this issue, we propose a technique that computes the path integral of the Grad-CAM++ gradient-based terms. An evaluation against alternative techniques shows that the proposed procedure performs effectively and efficiently on X-ray images and provides more informative visual explanations than existing methods.
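The path-integration idea summarized above can be illustrated with a short code sketch. The following is not the authors' implementation: the ResNet-50 backbone, the choice of `model.layer4[-1]` as target layer, the black-image baseline, the number of interpolation steps, and the class index are all illustrative assumptions. It approximates the path integral with a Riemann sum, computing a Grad-CAM++-style map at images interpolated between a baseline and the input and averaging the results.

```python
# Minimal sketch of a path-integrated Grad-CAM++-style saliency map (PyTorch).
# Assumptions, not taken from the paper: a ResNet-50 backbone, model.layer4[-1]
# as the target layer, a black-image baseline, 16 interpolation steps, and a
# Riemann-sum approximation of the path integral.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50


def integrated_gradcam_pp(model, target_layer, image, target_class,
                          baseline=None, steps=16):
    """Average Grad-CAM++-style maps over images interpolated between a
    baseline and the input, approximating the path integral of the
    gradient-based terms."""
    if baseline is None:
        baseline = torch.zeros_like(image)  # black-image baseline (assumption)

    activations, gradients = [], []

    def save_activation(module, inputs, output):
        activations.append(output)
        # Tensor hook captures d(score)/d(activation) during the backward pass.
        output.register_hook(lambda grad: gradients.append(grad))

    handle = target_layer.register_forward_hook(save_activation)
    cam_sum = 0.0

    for alpha in torch.linspace(1.0 / steps, 1.0, steps):
        activations.clear()
        gradients.clear()
        x = baseline + alpha * (image - baseline)  # point on the straight-line path
        score = model(x)[0, target_class]
        model.zero_grad()
        score.backward()

        A, dA = activations[0], gradients[0]  # feature maps and their gradients
        # Grad-CAM++ channel weights from first-, second- and third-order terms
        # (using the common grad**2 / grad**3 approximation).
        grad2, grad3 = dA ** 2, dA ** 3
        denom = 2.0 * grad2 + (A * grad3).sum(dim=(2, 3), keepdim=True)
        coeff = grad2 / torch.where(denom != 0, denom, torch.ones_like(denom))
        weights = (coeff * F.relu(dA)).sum(dim=(2, 3), keepdim=True)
        # Accumulate the class activation map along the path.
        cam_sum = cam_sum + F.relu((weights * A).sum(dim=1)).detach()

    handle.remove()
    cam = cam_sum / steps
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                        mode="bilinear", align_corners=False).squeeze(1)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]


# Illustrative usage: an untrained backbone and a random 224x224 "X-ray" tensor;
# class index 1 is assumed to stand for "abnormal".
model = resnet50(weights=None).eval()
xray = torch.rand(1, 3, 224, 224)
heatmap = integrated_gradcam_pp(model, model.layer4[-1], xray, target_class=1)
print(heatmap.shape)  # torch.Size([1, 224, 224])
```

In this sketch, setting steps=1 with alpha=1 reduces the map to plain Grad-CAM++ on the original image; the averaging over interpolated inputs is what approximates the path integral of the gradient-based terms.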


How to Cite
Mall, P., & Singh, P. (2022). Explainable Deep Learning approach for Shoulder Abnormality Detection in X-Rays Dataset. International Journal of Next-Generation Computing, 13(3). https://doi.org/10.47164/ijngc.v13i3.611

References

  1. Antol, Stanislaw et al. 2015. “VQA: Visual Question Answering.” In Proceedings of the IEEE International Conference on Computer Vision, 2425–33. DOI: https://doi.org/10.1109/ICCV.2015.279
  2. Arya, Vijay et al. 2019. “One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques.” arXiv preprint arXiv:1909.03012.
  3. Benjamini, Yoav. 2010. “Discovering the False Discovery Rate.” Journal of the Royal Statistical Society: series B (statistical methodology) 72(4): 405–16. DOI: https://doi.org/10.1111/j.1467-9868.2010.00746.x
  4. Chattopadhay, Aditya, Anirban Sarkar, Prantik Howlader, and Vineeth N Balasubramanian. 2018. “Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks.” In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 839–47. DOI: https://doi.org/10.1109/WACV.2018.00097
  5. Chicco, Davide, and Giuseppe Jurman. 2020. “The Advantages of the Matthews Correlation Coefficient (MCC) over F1 Score and Accuracy in Binary Classification Evaluation.” BMC Genomics 21(1): 1–13. DOI: https://doi.org/10.1186/s12864-019-6413-7
  6. Couteaux, Vincent, Olivier Nempont, Guillaume Pizaine, and Isabelle Bloch. 2019. “Towards Interpretability of Segmentation Networks by Analyzing DeepDreams.” In Interpretability of Machine Intelligence in Medical Image Computing and Multimodal Learning for Clinical Decision Support, Springer, 56–63. DOI: https://doi.org/10.1007/978-3-030-33850-3_7
  7. Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” arXiv preprint arXiv:1810.04805.
  8. Geis, J Raymond et al. 2019. “Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement.” Canadian Association of Radiologists Journal 70(4): 329–34. DOI: https://doi.org/10.1016/j.carj.2019.08.010
  9. Hatt, Mathieu, Chintan Parmar, Jinyi Qi, and Issam El Naqa. 2019. “Machine (Deep) Learning Methods for Image Processing and Radiomics.” IEEE Transactions on Radiation and Plasma Medical Sciences 3(2): 104–8. DOI: https://doi.org/10.1109/TRPMS.2019.2899538
  10. He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. “Deep Residual Learning for Image Recognition.” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 770–78. DOI: https://doi.org/10.1109/CVPR.2016.90
  11. Hinton, Geoffrey E, Simon Osindero, and Yee-Whye Teh. 2006. “A Fast Learning Algorithm for Deep Belief Nets.” Neural Computation 18(7): 1527–54. DOI: https://doi.org/10.1162/neco.2006.18.7.1527
  12. Kipf, Thomas N, and Max Welling. 2016. “Semi-Supervised Classification with Graph Convolutional Networks.” arXiv preprint arXiv:1609.02907.
  13. Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E Hinton. 2012. “ImageNet Classification with Deep Convolutional Neural Networks.” Advances in Neural Information Processing Systems 25: 1097–1105.
  14. Lévy, Daniel, and Arzav Jain. 2016. “Breast Mass Classification from Mammograms Using Deep Convolutional Neural Networks.” arXiv preprint arXiv:1612.00542.
  15. Mahendran, Aravindh, and Andrea Vedaldi. 2016. “Visualizing Deep Convolutional Neural Networks Using Natural Pre-Images.” International Journal of Computer Vision 120(3): 233–55. DOI: https://doi.org/10.1007/s11263-016-0911-8
  16. Mall, Pawan Kumar, Pradeep Kumar Singh, and Divakar Yadav. 2019. “GLCM Based Feature Extraction and Medical X-RAY Image Classification Using Machine Learning Techniques.” In 2019 IEEE Conference on Information and Communication Technology, 1–6. DOI: https://doi.org/10.1109/CICT48419.2019.9066263
  17. Van Molle, Pieter et al. 2018. “Visualizing Convolutional Neural Networks to Improve Decision Support for Skin Lesion Classification.” In Understanding and Interpreting Machine Learning in Medical Image Computing Applications, Springer, 115–23. DOI: https://doi.org/10.1007/978-3-030-02628-8_13
  18. Mordvintsev, Alexander, Christopher Olah, and Mike Tyka. 2015. “Inceptionism: Going Deeper into Neural Networks.” Google Research Blog.
  19. Nie, Dong, Yaozong Gao, Li Wang, and Dinggang Shen. 2018. “ASDNet: Attention Based Semi-Supervised Deep Networks for Medical Image Segmentation.” In International Conference on Medical Image Computing and Computer-Assisted Intervention, 370–78. DOI: https://doi.org/10.1007/978-3-030-00937-3_43
  20. Noack, Adam, Isaac Ahern, Dejing Dou, and Boyang Li. 2021. “An Empirical Study on the Relation Between Network Interpretability and Adversarial Robustness.” SN Computer Science 2(1): 1–13. DOI: https://doi.org/10.1007/s42979-020-00390-x
  21. Oh, Junhyuk et al. 2015. “Action-Conditional Video Prediction Using Deep Networks in Atari Games.” arXiv preprint arXiv:1507.08750.
  22. Omeiza, Daniel, Skyler Speakman, Celia Cintas, and Komminist Weldermariam. 2019. “Smooth Grad-CAM++: An Enhanced Inference Level Visualization Technique for Deep Convolutional Neural Network Models.” arXiv preprint arXiv:1908.01224.
  23. Papanastasopoulos, Zachary et al. 2020. “Explainable AI for Medical Imaging: Deep-Learning CNN Ensemble for Classification of Estrogen Receptor Status from Breast MRI.” In Medical Imaging 2020: Computer-Aided Diagnosis, 113140Z. DOI: https://doi.org/10.1117/12.2549298
  24. Pereira, Sérgio et al. 2018. “Automatic Brain Tumor Grading from MRI Data Using Convolutional Neural Networks and Quality Assessment.” In Understanding and Interpreting Machine Learning in Medical Image Computing Applications, Springer, 106–14. DOI: https://doi.org/10.1007/978-3-030-02628-8_12
  25. Sayres, Rory et al. 2019. “Using a Deep Learning Algorithm and Integrated Gradients Explanation to Assist Grading for Diabetic Retinopathy.” Ophthalmology 126(4): 552–64. DOI: https://doi.org/10.1016/j.ophtha.2018.11.016
  26. Selvaraju, Ramprasaath R et al. 2017. “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization.” In Proceedings of the IEEE International Conference on Computer Vision, 618–26. DOI: https://doi.org/10.1109/ICCV.2017.74
  27. Simonyan, Karen, Andrea Vedaldi, and Andrew Zisserman. 2013. “Deep inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps.” arXiv preprint arXiv:1312.6034.
  28. Smilkov, Daniel et al. 2017. “SmoothGrad: Removing Noise by Adding Noise.” arXiv preprint arXiv:1706.03825.
  29. Stiglic, Gregor et al. 2020. “Interpretability of Machine Learning-Based Prediction Models in Healthcare.” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 10(5): e1379. DOI: https://doi.org/10.1002/widm.1379
  30. Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. 2017. “Axiomatic Attribution for Deep Networks.” In International Conference on Machine Learning, 3319–28.
  31. Szegedy, Christian et al. 2015. “Going Deeper with Convolutions.” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1–9. DOI: https://doi.org/10.1109/CVPR.2015.7298594
  32. Torres-Velázquez, Maribel, Wei-Jie Chen, Xue Li, and Alan B McMillan. 2020. “Application and Construction of Deep Learning Networks in Medical Imaging.” IEEE Transactions on Radiation and Plasma Medical Sciences.
  33. Vinyals, Oriol, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. “Show and Tell: A Neural Image Caption Generator.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3156–64. DOI: https://doi.org/10.1109/CVPR.2015.7298935
  34. Wang, Ge. 2016. “A Perspective on Deep Imaging.” IEEE Access 4: 8914–24. DOI: https://doi.org/10.1109/ACCESS.2016.2624938
  35. Wang, Haofan, Zifan Wang, et al. 2020. “Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 24–25. DOI: https://doi.org/10.1109/CVPRW50498.2020.00020
  36. Wang, Haofan, Rakshit Naidu, Joy Michael, and Soumya Snigdha Kundu. 2020. “SS-CAM: Smoothed Score-CAM for Sharper Visual Feature Localization.” arXiv preprint arXiv:2006.14255.
  37. Wang, Linda, Zhong Qiu Lin, and Alexander Wong. 2020. “COVID-Net: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest X-Ray Images.” Scientific Reports 10(1): 1–12. DOI: https://doi.org/10.1038/s41598-020-76550-z
  38. Ying, Rex et al. 2019. “GNNExplainer: Generating Explanations for Graph Neural Networks.” Advances in Neural Information Processing Systems 32: 9240.
  39. Zeiler, Matthew D, and Rob Fergus. 2014. “Visualizing and Understanding Convolutional Networks.” In European Conference on Computer Vision, 818–33. DOI: https://doi.org/10.1007/978-3-319-10590-1_53
  40. Zhou, Bolei et al. 2016. “Learning Deep Features for Discriminative Localization.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2921–29. DOI: https://doi.org/10.1109/CVPR.2016.319
  41. Zhou, Keyang, and Bernhard Kainz. 2018. “Efficient Image Evidence Analysis of CNN Classification Results.” arXiv preprint arXiv:1801.01693.
  42. Zintgraf, Luisa M, Taco S Cohen, Tameem Adel, and Max Welling. 2017. “Visualizing Deep Neural Network Decisions: Prediction Difference Analysis.” arXiv preprint arXiv:1702.04595.
  43. Zolna, Konrad, Devansh Arpit, Dendi Suhubdy, and Yoshua Bengio. 2017. “Fraternal Dropout.” arXiv preprint arXiv:1711.00066.