Evidence Fusion with Contextual Discounting for Multi-modality Medical Image Segmentation

15 citations · 12 readers (Mendeley)

Abstract

As information sources are usually imperfect, it is necessary to take their reliability into account in multi-source information fusion tasks. In this paper, we propose a new deep framework that merges multi-modal MR image segmentation results using the formalism of Dempster-Shafer theory, while accounting for the reliability of the different modalities relative to the different classes. The framework is composed of an encoder-decoder feature extraction module, an evidential segmentation module that computes a belief function at each voxel for each modality, and a multi-modality evidence fusion module, which assigns a vector of discount rates to each modality's evidence and combines the discounted evidence using Dempster's rule. The whole framework is trained by minimizing a new loss function based on a discounted Dice index to increase segmentation accuracy and reliability. The method was evaluated on the BraTS 2021 database of 1251 patients with brain tumors. Quantitative and qualitative results show that our method outperforms the state of the art and implements an effective new idea for fusing multi-source information within deep neural networks.
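The discount-then-combine step described above can be sketched in a few lines. The sketch below is illustrative only and is not the paper's implementation: it uses classical (scalar) Shafer discounting, which is the special case of the contextual discounting the paper learns per class, and it assumes each modality's evidential segmentation module outputs, per voxel, a mass function whose focal sets are the K class singletons plus the whole frame Ω. The reliability values `0.9` and `0.6` and the modality names are made up for the example.

```python
import numpy as np

def discount(m, alpha):
    """Classical Shafer discounting: scale every focal mass by the
    reliability alpha and transfer the remainder to ignorance (Omega).
    m = [m(theta_1), ..., m(theta_K), m(Omega)]."""
    out = alpha * np.asarray(m, dtype=float)
    out[-1] += 1.0 - alpha
    return out

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal sets are the
    K singletons plus Omega (the per-voxel form assumed here)."""
    m1, m2 = np.asarray(m1, dtype=float), np.asarray(m2, dtype=float)
    K = len(m1) - 1
    s1, s2, o1, o2 = m1[:K], m2[:K], m1[-1], m2[-1]
    singletons = s1 * s2 + s1 * o2 + o1 * s2       # agreeing evidence
    omega = o1 * o2                                # joint ignorance
    # Conflict = mass assigned to disagreeing singleton pairs (j != k).
    conflict = s1.sum() * s2.sum() - (s1 * s2).sum()
    return np.append(singletons, omega) / (1.0 - conflict)

# Two modalities' voxel-level mass functions over 3 classes + Omega
# (hypothetical numbers, with hypothetical reliabilities 0.9 and 0.6).
m_t1 = discount([0.6, 0.2, 0.1, 0.1], 0.9)
m_flair = discount([0.2, 0.5, 0.1, 0.2], 0.6)
fused = dempster_combine(m_t1, m_flair)
print(fused)  # a valid mass function: non-negative, sums to 1
```

Discounting a less reliable modality shifts its mass toward Ω, so it contributes less to the combined decision; in the paper the discount rates are a vector over classes, learned jointly with the network.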

Citation (APA)

Huang, L., Denoeux, T., Vera, P., & Ruan, S. (2022). Evidence Fusion with Contextual Discounting for Multi-modality Medical Image Segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13435 LNCS, pp. 401–411). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-16443-9_39
