Why Fuzzy Techniques in Explainable AI? Which Fuzzy Techniques in Explainable AI?

Abstract

One of the big challenges of many state-of-the-art AI techniques, such as deep learning, is that their results do not come with any explanations; since some of the resulting conclusions and recommendations are far from optimal, it is difficult to distinguish good advice from bad. It is therefore desirable to come up with explainable AI. In this paper, we argue that fuzzy techniques are a proper way to achieve this explainability, and we also analyze which fuzzy techniques are most appropriate for this purpose. Interestingly, it turns out that the answer depends on the problem being solved: e.g., different “and”- and “or”-operations are preferable for controlling a single object than for controlling a group of objects.
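
The “and”- and “or”-operations mentioned above are, in fuzzy logic, t-norms and t-conorms. The abstract does not say which specific operations the paper recommends for each case, so the following is only a minimal illustrative Python sketch of two standard pairs: Zadeh’s min/max pair and the algebraic product with its dual probabilistic sum.

```python
# Illustrative sketch (not the paper's specific recommendation): two common
# pairs of fuzzy "and"-operations (t-norms) and "or"-operations (t-conorms).
# Which pair is preferable depends on the control problem, as the paper argues.

def and_min(a: float, b: float) -> float:
    """Zadeh's original "and": the minimum t-norm."""
    return min(a, b)

def and_product(a: float, b: float) -> float:
    """The algebraic-product t-norm."""
    return a * b

def or_max(a: float, b: float) -> float:
    """Zadeh's original "or": the maximum t-conorm."""
    return max(a, b)

def or_prob_sum(a: float, b: float) -> float:
    """The probabilistic-sum t-conorm, dual to the product t-norm."""
    return a + b - a * b

if __name__ == "__main__":
    a, b = 0.7, 0.4  # example membership degrees
    print(and_min(a, b), and_product(a, b))   # 0.4  0.28
    print(or_max(a, b), or_prob_sum(a, b))    # 0.7  0.82
```

As the sample run shows, the two pairs can assign noticeably different degrees to the same combined statement, which is why the choice of operations matters for the resulting fuzzy control rules.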

Citation (APA)

Cohen, K., Bokati, L., Ceberio, M., Kosheleva, O., & Kreinovich, V. (2022). Why Fuzzy Techniques in Explainable AI? Which Fuzzy Techniques in Explainable AI? In Lecture Notes in Networks and Systems (Vol. 258, pp. 74–78). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-82099-2_7
