Although complex machine learning models (e.g., Random Forests, Neural Networks) commonly outperform simpler, inherently interpretable models (e.g., Linear Regression, Decision Trees), clinicians in the healthcare domain find it hard to understand and trust these complex models because their predictions come with little intuition or explanation. With the General Data Protection Regulation (GDPR), the plausibility and verifiability of predictions made by machine learning models have become essential. Interpretability techniques for machine learning models are therefore a focus area of this project. In general, these techniques aim to shed light on the prediction process of a machine learning model and to explain how its predictions are generated. The project focuses on the following:
- Proposing fundamental quantitative measures for assessing the quality of interpretability techniques, together with a comprehensive experimental evaluation of state-of-the-art local model-agnostic interpretability techniques (see the surrogate-fidelity sketch after this list).
- Proposing a novel local model-agnostic explanation framework that learns a set of high-level, transparent concept definitions in high-dimensional tabular data, using clinician-labeled concepts rather than raw features. The framework explains an instance's prediction with concepts that align with the clinician's understanding of what each concept means, through an interpretable model built on the concepts the black-box model deems important for that prediction (see the concept sketch below).
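To make the first item concrete, the sketch below builds a LIME-style local surrogate around a single instance and scores it with one candidate quality measure: local fidelity, i.e., the agreement between the surrogate and the black box on a perturbed neighborhood. This is a minimal, hypothetical sketch; the dataset, the black-box model, and the `local_fidelity` helper are illustrative assumptions, not the measures proposed in the project's publications.

```python
# Minimal sketch (not the project's actual measures): local fidelity of a
# LIME-style surrogate, scored as agreement with the black box on a
# perturbed neighborhood of the instance being explained.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def perturb(x, n_samples=500, scale=0.1, rng=None):
    """Sample a Gaussian neighborhood around instance x."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, scale * X.std(axis=0), size=(n_samples, x.shape[0]))
    return x + noise

def explain_locally(x):
    """Fit a proximity-weighted linear surrogate to the black box around x."""
    Z = perturb(x)
    p = black_box.predict_proba(Z)[:, 1]         # black-box outputs
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (2 * d.std() ** 2))   # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate, Z, p

def local_fidelity(surrogate, Z, p):
    """Quality measure: R^2 agreement between surrogate and black box."""
    return surrogate.score(Z, p)

surrogate, Z, p = explain_locally(X[0])
print("surrogate weights (first 5):", surrogate.coef_[:5])
print("local fidelity (R^2):", local_fidelity(surrogate, Z, p))
```

The surrogate's coefficients serve as the explanation, and the fidelity score quantifies how faithfully that explanation reflects the black box near the explained instance.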
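The concept-based framework in the second item can be sketched in the same spirit: project raw features onto clinician-labeled concepts, then fit an interpretable model over those concepts to mimic the black box. Everything below is a hypothetical illustration, not the published method: the concept definitions and helper names are invented, and the mimic model is fit globally for brevity, whereas the project's framework explains individual instances.

```python
# Minimal sketch (not the published framework): explain a black-box model
# in terms of clinician-labeled concepts instead of raw features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))                    # raw tabular features
y = (X[:, 0] + X[:, 3] - X[:, 7] > 0).astype(int)  # synthetic outcome
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Hypothetical clinician-labeled concepts: each concept aggregates raw
# features into a high-level, human-meaningful indicator.
concepts = {
    "poor_fitness":     lambda X: (X[:, 0] + X[:, 1] > 0).astype(float),
    "hypertension":     lambda X: (X[:, 3] > 0.5).astype(float),
    "high_cholesterol": lambda X: (X[:, 7] > 0.5).astype(float),
}

def to_concept_space(X):
    """Project raw features onto the clinician-defined concept space."""
    return np.column_stack([f(X) for f in concepts.values()])

# Fit an interpretable model over concepts to mimic the black box's labels.
C = to_concept_space(X)
bb_labels = black_box.predict(X)
explainer = LogisticRegression().fit(C, bb_labels)

# Each weight says how much a clinician-meaningful concept drives the
# mimicked black-box decision.
for name, weight in zip(concepts, explainer.coef_[0]):
    print(f"{name}: {weight:+.2f}")
```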
Project Publications:
- R. ElShawi and M. Al-Mallah. "Interpretable Local Concept-based Explanation with Human Feedback to Predict All-cause Mortality." Journal of Artificial Intelligence Research 75 (2022): 833-855. link
- R. ElShawi, K. Kilanava, and S. Sakr. "An interpretable semi-supervised framework for patch-based classification of breast cancer." Scientific Reports 12, no. 1 (2022): 1-15. link
- R. ElShawi, Y. Sherif, M. Al-Mallah, and S. Sakr. "Interpretability in healthcare: A comparative study of local machine learning interpretability techniques." Computational Intelligence 37, no. 4 (2021): 1633-1650. link
- R. ElShawi, M. Al-Mallah, and S. Sakr. "On the interpretability of machine learning-based model for predicting hypertension." BMC Medical Informatics and Decision Making 19, no. 1 (2019): 1-32. link
- R. ElShawi, Y. Sherif, M. Al-Mallah, and S. Sakr. "ILIME: Local and Global Interpretable Model-Agnostic Explainer of Black-Box Decision." In European Conference on Advances in Databases and Information Systems, pp. 53-68. Springer, Cham, 2019. link
Contact Information
Radwa El Shawi
Radwa [dot] elshawi [at] ut [dot] ee