Abstract
Explainable Artificial Intelligence (XAI) plays a pivotal role in helping users understand the predictions of AI models. Among the various branches of XAI, decision tree surrogate models have gained considerable popularity because they approximate the predictions of black-box models while remaining interpretable through their tree structure. Despite the abundance of proposed XAI methods, evaluating their interpretability remains challenging. Traditional subjective evaluation methods rely heavily on users’ domain knowledge, making them prone to bias and costly to conduct. Objective evaluation methods, in turn, address only limited aspects of interpretability and are rarely tailored to decision tree surrogate models. This paper proposes a comprehensive framework for evaluating the interpretability of XAI methods based on decision tree surrogate models. The framework comprises six quantitative properties, namely complexity, clarity, stability, consistency, sufficiency, and causality, and provides a calculation method for each. Using this framework, we conduct extensive experiments on several classical decision tree-based XAI methods. Finally, we demonstrate the framework’s practical application through a case study of methods aimed at enhancing interpretability. Our objective is to provide practitioners with guidance in selecting appropriate XAI methods for their specific use cases and to help developers improve the performance of their XAI systems.