Despite the growing use of machine learning-based prediction models in the medical domain [1, 2], clinicians still do not trust these models in practice, for several reasons. One important reason is that most developed models focus on predictive performance (e.g., accuracy or area under the curve) but rarely explain their predictions in a form that users can understand. As a result, most currently available predictive systems still depend on the knowledge of domain experts [3, 4].
Generally speaking, machine learning interpretability techniques can be classified as model-specific or model-agnostic. Model-specific interpretability techniques fit only one particular model class. Model-agnostic techniques, on the other hand, are more general: they can be applied to any machine learning model and are usually called post-hoc methods. Post-hoc methods such as LIME work by locally fitting a simpler model around the instance to be explained, producing an explanation that is locally faithful [5]. Other post-hoc techniques work by perturbing the instance to be explained and observing how the prediction changes [6, 7]; both ideas are sketched in the code below. Only a few papers address the interpretability of prediction models on medical image datasets [8]. The main goal of this project is to develop post-hoc interpretable models for automatically extracted machine learning features in medical images, in a way that mimics how an expert extracts relevant information from medical images.
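To make the local-surrogate idea concrete, the following is a minimal sketch of a LIME-style explanation, assuming a generic black-box `predict_proba` function, Gaussian perturbations, and a ridge-regression surrogate; the names and parameter choices are illustrative, not the reference implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(predict_proba, x, n_samples=1000, kernel_width=0.75):
    """Locally explain one prediction of a black-box classifier.

    predict_proba: callable mapping an (n, d) array to class-1 probabilities
                   (an assumed interface for this sketch).
    x: the 1-D instance to explain.
    """
    rng = np.random.default_rng(0)
    # 1. Perturb the instance by sampling Gaussian noise around it.
    samples = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Query the black box on the perturbed points.
    preds = predict_proba(samples)
    # 3. Weight samples by proximity to x (exponential kernel), so the
    #    surrogate is faithful near the instance being explained.
    dists = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit a weighted linear surrogate; its coefficients serve as the
    #    locally faithful explanation of the prediction at x.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(samples, preds, sample_weight=weights)
    return surrogate.coef_
```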
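Similarly, a perturbation-based explanation can be sketched as an occlusion sensitivity map for an image classifier: mask one patch at a time and record how much the prediction drops. The zero-masking value and patch size below are illustrative assumptions, not a specific published method.

```python
import numpy as np

def occlusion_map(predict_proba, image, patch=8):
    """predict_proba: callable mapping an (n, H, W) batch to class-1
    probabilities (assumed interface); image: a single (H, W) array."""
    base = predict_proba(image[None])[0]  # unperturbed prediction
    h, w = image.shape
    heat = np.zeros(((h + patch - 1) // patch, (w + patch - 1) // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one patch
            # A large drop means the prediction relies on this region.
            heat[i // patch, j // patch] = base - predict_proba(occluded[None])[0]
    return heat
```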
References:
1. Kononenko I. Machine learning for medical diagnosis: history, state of the art and perspective. Artificial Intelligence in Medicine. 2001;23(1):89-109.
2. Deo RC. Machine learning in medicine. Circulation. 2015;132(20):1920-1930.
3. Singal AG, Rahimi RS, Clark C, Ma Y, Cuthbert JA, Rockey DC, et al. An automated model using electronic medical record data identifies patients with cirrhosis at high risk for readmission. Clinical Gastroenterology and Hepatology. 2013;11(10):1335-1341.
4. He D, Mathews SC, Kalloo AN, Hutfless S. Mining high-dimensional administrative claims data to predict early hospital readmissions. Journal of the American Medical Informatics Association. 2014;21(2):272-279.
5. Ribeiro MT, Singh S, Guestrin C. "Why should I trust you?": Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM; 2016. p. 1135-1144.
6. Simonyan K, Vedaldi A, Zisserman A. Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034. 2013.
7. Li J, Monroe W, Jurafsky D. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220. 2016.
8. Pereira S, Meier R, McKinley R, Wiest R, Alves V, Silva CA, Reyes M. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation. Medical Image Analysis. 2018;44:228-244.