From What to Why, the Growing Need for a Focus Shift Toward Explainability of AI in Digital Pathology


Abstract

While it is impossible to deny the performance gains achieved through the incorporation of deep learning (DL) and other artificial intelligence (AI)-based techniques in pathology, minimal work has been done to answer the crucial question of why these algorithms predict what they predict. Tracing classification decisions back to specific input features allows for quick identification of model bias and provides additional insight into underlying biological mechanisms. In digital pathology, increasing the explainability of AI models would have the largest and most immediate impact on the image classification task. In this review, we detail some considerations that should be made in order to develop models with a focus on explainability.

Citation (APA)

Border, S. P., & Sarder, P. (2022, January 11). From What to Why, the Growing Need for a Focus Shift Toward Explainability of AI in Digital Pathology. Frontiers in Physiology. Frontiers Media S.A. https://doi.org/10.3389/fphys.2021.821217
