Current registration evaluations typically compute target registration error (TRE) using manually annotated landmark datasets. The quality of these annotations is therefore crucial for unbiased comparisons, because registration algorithms are both trained and tested on them. Even when data providers mitigate inter-observer variability by employing multiple raters, additional quality control, such as third-party screening, can still reassure intended users. Examining landmark quality in neurosurgical datasets (RESECT and BITE) poses specific challenges. In this study, we applied the variogram, a tool used extensively in geostatistics, to convert 3D landmark distributions into an intuitive 2D representation. This allowed us to efficiently identify potentially problematic cases for examination by experienced radiologists. In both the RESECT and BITE datasets, we identified and confirmed a small number of landmarks with potential localization errors, and found that in some cases the landmark distribution was not ideal for an unbiased assessment of non-rigid registration errors. In the discussion, we offer constructive suggestions for improving the utility of publicly available annotated data.
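The abstract does not give the paper's exact formulation, but the classical empirical semivariogram it refers to can be sketched as follows: for a scalar quantity sampled at landmark positions (e.g. one component of the annotated displacement), the semivariance of value differences is averaged within distance bins, turning the 3D landmark distribution into a 2D distance-versus-variance plot. The function name, binning scheme, and parameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def empirical_variogram(points, values, n_bins=15):
    """Empirical semivariogram of a scalar field sampled at 3D landmarks.

    points : (N, 3) array of landmark coordinates (illustrative input).
    values : (N,) scalar value at each landmark, e.g. one displacement
             component between corresponding pre/post annotations.
    Returns (bin centers h, semivariance gamma(h)); empty bins are NaN.
    """
    # All pairwise spatial distances and squared value differences.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(points), k=1)  # keep each pair once
    d, sq = d[iu], sq[iu]

    # Bin pairs by separation distance and average half the squared
    # differences within each bin (the classical semivariance estimate).
    edges = np.linspace(0.0, d.max(), n_bins + 1)
    idx = np.clip(np.digitize(d, edges) - 1, 0, n_bins - 1)
    gamma = np.array([sq[idx == b].mean() / 2.0 if np.any(idx == b) else np.nan
                      for b in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, gamma
```

Plotting `gamma` against `centers` gives the kind of 2D summary the abstract describes: landmarks whose values disagree strongly with nearby neighbors inflate the short-range semivariance, flagging cases worth manual review.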
CITATION STYLE
Luo, J., Ma, G., Haouchine, N., Xu, Z., Wang, Y., Kapur, T., … Frisken, S. (2022). On the Dataset Quality Control for Image Registration Evaluation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13436 LNCS, pp. 36–45). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-16446-0_4