HaN-Seg: The head and neck organ-at-risk CT and MR segmentation challenge

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 1.84 MB, PDF document

Authors

  • Gašper Podobnik
  • Bulat Ibragimov
  • Elias Tappeiner
  • Chanwoong Lee
  • Jin Sung Kim
  • Zacharia Mesbah
  • Romain Modzelewski
  • Yihao Ma
  • Fan Yang
  • Mikołaj Rudecki
  • Marek Wodziński
  • Primož Peterlin
  • Primož Strojan
  • Tomaž Vrtovec

Background and purpose: To promote the development of auto-segmentation methods for head and neck (HaN) radiation treatment (RT) planning that exploit the information of computed tomography (CT) and magnetic resonance (MR) imaging modalities, we organized HaN-Seg: The Head and Neck Organ-at-Risk CT and MR Segmentation Challenge.

Materials and methods: The challenge task was to automatically segment 30 organs-at-risk (OARs) of the HaN region in 14 withheld test cases, given the availability of 42 publicly available training cases. Each case consisted of one contrast-enhanced CT and one T1-weighted MR image of the HaN region of the same patient, with up to 30 corresponding reference OAR delineation masks. Performance was evaluated in terms of the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95), and statistical ranking was applied for each metric by pairwise comparison of the submitted methods using the Wilcoxon signed-rank test.

Results: While 23 teams registered for the challenge, only seven submitted their methods for the final phase. The top-performing team achieved a DSC of 76.9% and an HD95 of 3.5 mm. All participating teams used architectures based on the U-Net, with the winning team combining rigid MR-to-CT registration with network entry-level concatenation of both modalities.

Conclusion: This challenge simulated a real-world clinical scenario by providing non-registered MR and CT images with varying fields-of-view and voxel sizes. Remarkably, the top-performing teams achieved segmentation performance surpassing the inter-observer agreement on the same dataset. These results set a benchmark for future research on this publicly available dataset and on paired multi-modal image segmentation in general.
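The evaluation protocol summarized above (per-OAR DSC and HD95, followed by pairwise statistical comparison of the submissions) can be illustrated with a short sketch. The Python code below is a minimal, illustrative implementation and not the official challenge evaluation code: the boundary-based HD95 computation, the significance threshold, and the win-counting ranking scheme are assumptions that may differ in detail from the organizers' procedure (e.g., in per-OAR aggregation and tie handling).

```python
# Minimal sketch (not the official HaN-Seg evaluation code) of DSC, HD95 and a
# Wilcoxon-based pairwise ranking; thresholds and the ranking scheme are assumptions.
import numpy as np
from scipy import ndimage, stats


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance (mm); assumes non-empty masks."""
    a, b = a.astype(bool), b.astype(bool)
    a_surf = a ^ ndimage.binary_erosion(a)   # boundary voxels of mask a
    b_surf = b ^ ndimage.binary_erosion(b)   # boundary voxels of mask b
    # distance from every voxel to the nearest boundary voxel of the other mask
    dt_to_b = ndimage.distance_transform_edt(~b_surf, sampling=spacing)
    dt_to_a = ndimage.distance_transform_edt(~a_surf, sampling=spacing)
    d = np.hstack([dt_to_b[a_surf], dt_to_a[b_surf]])  # pooled surface distances
    return float(np.percentile(d, 95))


def pairwise_wilcoxon_rank(scores, higher_is_better=True, alpha=0.05):
    """Rank teams by counting significant pairwise wins (Wilcoxon signed-rank test).

    `scores[team]` is an array with one case-averaged score per test case,
    aligned across teams so that scores can be compared pairwise per case.
    """
    teams = list(scores)
    wins = {t: 0 for t in teams}
    for i, ti in enumerate(teams):
        for tj in teams[i + 1:]:
            diff = scores[ti] - scores[tj]
            if np.allclose(diff, 0):
                continue  # identical results, no winner for this pair
            p = stats.wilcoxon(scores[ti], scores[tj]).pvalue
            ti_better = diff.mean() > 0 if higher_is_better else diff.mean() < 0
            if p < alpha:
                wins[ti if ti_better else tj] += 1
    return sorted(teams, key=lambda t: wins[t], reverse=True)
```

For HD95 the ranking would be called with `higher_is_better=False`, since smaller distances indicate better agreement, while DSC uses the default direction.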

Original language: English
Article number: 110410
Journal: Radiotherapy and Oncology
Volume: 198
ISSN: 0167-8140
DOI
Status: Published - September 2024

Bibliographical note

Publisher Copyright:
© 2024 The Authors

ID: 399170664