Machine learning techniques have been applied in areas such as finance, advertising, marketing, and medicine, and have achieved satisfactory performance. In practice, complex machine learning models such as Random Forests, Support Vector Machines, and Neural Networks usually achieve better performance than interpretable models such as Linear Regression and Decision Trees. Generally speaking, there is a tradeoff between model performance and model complexity: the more accurate the model, the less interpretable it tends to be. Machine learning interpretability is defined as the degree to which a machine learning user can understand and interpret the predictions made by the developed models [1,2]. Evaluation metrics such as accuracy and area under the curve do not reflect many important aspects of a developed machine learning model, such as fairness, privacy, and safety. Machine learning interpretability matters for several reasons. First, explaining predictions gives insight into how the model works and helps improve its performance. Second, the General Data Protection Regulation (GDPR) requires industries to explain any decision taken by a machine when automated decision making takes place: “a right of explanation for all individuals to obtain meaningful explanations of the logic involved” [3].
There has been some work on interpretability, covering both local explanations of an individual prediction [4,5] and global explanations of the overall prediction model [6,7,8]. The aim of this project is, first, to develop efficient model-agnostic techniques (techniques that can be applied to any machine learning model) that support the interpretability of black-box prediction models, and second, to develop techniques for cluster interpretability that explain why a specific instance is assigned to a certain cluster.
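As a concrete illustration of a model-agnostic global explanation, the sketch below implements permutation feature importance: the drop in a model's score when one feature's values are shuffled, which breaks that feature's link to the target. This is an illustrative example only, not the project's proposed method; the `SignModel` toy model and the function names are assumptions made for the demo.

```python
# Sketch of model-agnostic permutation feature importance.
# Works with any fitted model that exposes .predict(), regardless of
# its internal structure -- this is what makes it "model-agnostic".
import numpy as np

def permutation_importance(model, X, y, score_fn, n_repeats=5, seed=0):
    """Mean drop in score when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    base = score_fn(y, model.predict(X))        # score on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])               # break feature-target link
            drops.append(base - score_fn(y, model.predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy "black box" for the demo: predicts the sign of feature 0 only.
class SignModel:
    def predict(self, X):
        return (X[:, 0] > 0).astype(int)

def accuracy(y, yhat):
    return np.mean(y == yhat)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)                   # only feature 0 matters
imp = permutation_importance(SignModel(), X, y, accuracy)
# imp[0] should be large; imp[1] and imp[2] should be near zero.
```

Because the procedure only queries `model.predict`, the same code applies unchanged to a Random Forest, an SVM, or a Neural Network, which is the property the model-agnostic techniques above rely on.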
- Abdul A, Vermeulen J, Wang D, Lim BY, Kankanhalli M. Trends and trajectories for explainable, accountable and intelligible systems: An hci research agenda. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM; 2018. p. 582.
- Lim BY, Dey AK, Avrahami D. Why and why not explanations improve the intelligibility of context-aware intelligent systems. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM; 2009. p. 2119-2128.
- Goodman B, Flaxman S. European Union regulations on algorithmic decision-making and a "right to explanation". arXiv preprint arXiv:1606.08813. 2016.
- Ribeiro MT, Singh S, Guestrin C. Why should i trust you?: Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM; 2016. p. 1135-1144
- Koh PW, Liang P. Understanding black-box predictions via influence functions. arXiv preprint arXiv:1703.04730. 2017.
- Fisher A, Rudin C, Dominici F. Model class reliance: Variable importance measures for any machine learning model class, from the "Rashomon" perspective. arXiv preprint arXiv:1801.01489. 2018.
- Friedman JH. Greedy function approximation: a gradient boosting machine. Annals of Statistics. 2001. p. 1189-1232.