Multimodal imaging combining positron emission tomography (PET) and magnetic resonance imaging (MRI) provides complementary information about metabolism and anatomy. While the appearances of MRI and PET images are distinctive, fundamental inter-image dependencies relate structure and function. In PET-MRI, typical PET reconstruction methods use priors that enforce PET-MRI dependencies at the very fine scale of image gradients and thus cannot capture larger-scale inter-image correlations or intra-image texture patterns. Some recent methods enforce statistical models of MRI-image patches on PET-image patches, which risks infusing anatomical features into PET images. In contrast, we propose a novel patch-based joint dictionary model for PET and MRI, learning regularity within individual patches and correlations across spatially corresponding patches, for Bayesian PET reconstruction using expectation maximization. Reconstructions on simulated and in vivo PET-MRI data show that our method gives better-regularized images with smaller errors than the state of the art.
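The joint-dictionary idea can be sketched concretely: learn a single dictionary over concatenated, spatially corresponding PET and MRI patches, so each atom couples the two modalities; the sparse-coding residual on the PET half of each joint patch then serves as a patch-based prior inside the reconstruction. The sketch below is a minimal illustration of that idea, not the authors' implementation: the patch size, stride, dictionary size, sparsity level, the synthetic images, and the choice of scikit-learn's MiniBatchDictionaryLearning are all assumptions introduced here.

```python
# Minimal sketch (NOT the paper's exact model): learn one dictionary over
# concatenated PET+MRI patches so atoms capture inter-image correlations.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Hypothetical stand-ins for co-registered PET and MRI slices.
pet = rng.random((64, 64))
mri = rng.random((64, 64))

def extract_patches(img, p=8, stride=4):
    """Collect vectorized p-by-p patches on a regular grid."""
    patches = []
    for i in range(0, img.shape[0] - p + 1, stride):
        for j in range(0, img.shape[1] - p + 1, stride):
            patches.append(img[i:i+p, j:j+p].ravel())
    return np.array(patches)

# Concatenate spatially corresponding PET and MRI patches into joint
# vectors; a dictionary learned on these models both intra-patch
# regularity and cross-modality structure at patch scale.
X = np.hstack([extract_patches(pet), extract_patches(mri)])
mean = X.mean(axis=0)
Xc = X - mean  # center features before dictionary learning

# Illustrative hyperparameters: 64 joint atoms, 5-sparse OMP codes.
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                   transform_algorithm='omp',
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
codes = dico.fit_transform(Xc)   # sparse codes for each joint patch
D = dico.components_             # joint PET+MRI dictionary atoms

# In a Bayesian/EM reconstruction loop, the sparse-approximation
# residual on the PET half of each joint patch would play the role
# of the patch-based regularizer on the PET image estimate.
recon = codes @ D + mean
print("joint patch approximation RMSE:",
      np.sqrt(np.mean((recon - X) ** 2)))
```

Concatenating the modalities before learning, rather than fitting a dictionary to MRI patches and imposing it on PET patches, is what lets the prior reward cross-modality agreement without forcing anatomical features into the PET estimate.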
CITATION STYLE
Sudarshan, V. P., Chen, Z., & Awate, S. P. (2018). Joint PET+MRI patch-based dictionary for Bayesian random field PET reconstruction. In Lecture Notes in Computer Science (Vol. 11070, pp. 338–346). Springer. https://doi.org/10.1007/978-3-030-00928-1_39