Comparing Explanations from Glass-Box and Black-Box Machine-Learning Models

Abstract

Explainable Artificial Intelligence (XAI) aims to introduce transparency and intelligibility into the decision-making processes of AI systems. In recent years, most effort has gone into building XAI algorithms that can explain black-box models. However, in many cases, including medical and industrial applications, the explanation of a decision may be worth as much as, or even more than, the decision itself. This raises the question of explanation quality. In this work, we investigate how explanations derived from black-box models combined with XAI algorithms differ from those obtained from inherently interpretable glass-box models. We also ask whether there are justified cases for using less accurate glass-box models instead of complex black-box approaches. We perform our study on publicly available datasets.
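The comparison described in the abstract can be sketched in code. The paper does not name its datasets or XAI algorithms here, so the following is a minimal illustration under assumed choices: scikit-learn's breast-cancer dataset as a stand-in public dataset, a shallow decision tree as the glass-box model, a random forest as the black-box model, and permutation importance as a stand-in model-agnostic explainer.

```python
# Hedged sketch: datasets and explainers below are assumptions, not the
# paper's actual experimental setup.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Glass-box model: a shallow tree whose structure is directly interpretable.
glass = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
# Black-box model: an ensemble that needs a post-hoc explainer.
black = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Glass-box explanation: importances read directly from the fitted tree.
glass_imp = glass.feature_importances_

# Black-box explanation: model-agnostic permutation importance on held-out data.
black_imp = permutation_importance(
    black, X_te, y_te, n_repeats=10, random_state=0
).importances_mean

# Compare the two explanations, e.g. via their top-5 feature rankings.
top_glass = sorted(range(len(glass_imp)), key=lambda i: -glass_imp[i])[:5]
top_black = sorted(range(len(black_imp)), key=lambda i: -black_imp[i])[:5]
print("glass-box top-5 features:", top_glass)
print("black-box top-5 features:", top_black)
```

Agreement (or disagreement) between such rankings is one simple proxy for how much the post-hoc explanation of the black-box model diverges from the glass-box model's inherent explanation.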

Citation (APA)

Kuk, M., Bobek, S., & Nalepa, G. J. (2022). Comparing Explanations from Glass-Box and Black-Box Machine-Learning Models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13352 LNCS, pp. 668–675). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-08757-8_55
