Semantic similarity metrics for image registration

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 4.14 MB, PDF document

Image registration aims to find geometric transformations that align images. Most algorithmic and deep learning-based methods solve the registration problem by minimizing a loss function consisting of a similarity metric, which compares the aligned images, and a regularization term, which ensures smoothness of the transformation. Existing similarity metrics such as Euclidean Distance or Normalized Cross-Correlation focus on aligning pixel intensity values or correlations, which causes difficulties with low intensity contrast, noise, and ambiguous matching. We propose a semantic similarity metric for image registration, which instead focuses on aligning image areas based on semantic correspondence. Our approach learns dataset-specific features that drive the optimization of a learning-based registration model. We train both an unsupervised approach, extracting features with an auto-encoder, and a semi-supervised approach, using supplemental segmentation data. We validate the semantic similarity metric using both deep-learning-based and algorithmic image registration methods. Compared to existing methods across four different image modalities and applications, the method achieves consistently high registration accuracy and smooth transformation fields.
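The loss structure described in the abstract, a similarity term between the aligned images plus a smoothness regularizer on the transformation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `encode` is a hypothetical placeholder for the dataset-specific feature extractor (e.g. the encoder of the trained auto-encoder), and a simple diffusion-style gradient penalty stands in for the regularization term.

```python
import numpy as np

def encode(image):
    # Hypothetical stand-in for a trained, dataset-specific feature
    # extractor; a real model would map the image to semantic features.
    return image

def semantic_similarity(fixed, warped_moving):
    # Mean squared distance between the feature maps of the fixed image
    # and the warped moving image.
    f_fix, f_mov = encode(fixed), encode(warped_moving)
    return np.mean((f_fix - f_mov) ** 2)

def smoothness(displacement):
    # Diffusion-style regularizer: penalize spatial gradients of the
    # displacement field to encourage smooth transformations.
    dy = np.diff(displacement, axis=0)
    dx = np.diff(displacement, axis=1)
    return np.mean(dy ** 2) + np.mean(dx ** 2)

def registration_loss(fixed, warped_moving, displacement, lam=0.1):
    # Total loss = similarity term + weighted regularization term.
    return semantic_similarity(fixed, warped_moving) + lam * smoothness(displacement)
```

With an identity transformation of identical images the loss is zero; any intensity mismatch or non-constant displacement increases it, which is what the optimizer trades off during registration.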

Original language: English
Article number: 102830
Journal: Medical Image Analysis
Volume: 87
Number of pages: 16
ISSN: 1361-8415
DOI
Status: Published - 2023

Bibliographic note

Funding Information:
We thank Matthew Quay, the Cell Tracking Challenge, and the Cancer Imaging Archive for the provision of the datasets, and Huaqi Qui for providing the implementation of the metric. This work was funded in part by the Lundbeck Foundation, Denmark (grant no. R218-2016-883), in part by the Novo Nordisk Foundation through the Center for Basic Machine Learning Research in Life Science (grant no. 0062606), and in part by the Danish National Research Foundation through the Danish Pioneer Centre for AI (grant no. P1).

Publisher Copyright:
© 2023 The Author(s)
