Mutual-Prototype Adaptation for Cross-Domain Polyp Segmentation

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

Mutual-Prototype Adaptation for Cross-Domain Polyp Segmentation. / Yang, Chen; Guo, Xiaoqing; Zhu, Meilu; Ibragimov, Bulat; Yuan, Yixuan.

In: IEEE Journal of Biomedical and Health Informatics, Vol. 25, No. 10, 2021, p. 3886-3897.


Harvard

Yang, C, Guo, X, Zhu, M, Ibragimov, B & Yuan, Y 2021, 'Mutual-Prototype Adaptation for Cross-Domain Polyp Segmentation', IEEE Journal of Biomedical and Health Informatics, vol. 25, no. 10, pp. 3886-3897. https://doi.org/10.1109/JBHI.2021.3077271

APA

Yang, C., Guo, X., Zhu, M., Ibragimov, B., & Yuan, Y. (2021). Mutual-Prototype Adaptation for Cross-Domain Polyp Segmentation. IEEE Journal of Biomedical and Health Informatics, 25(10), 3886-3897. https://doi.org/10.1109/JBHI.2021.3077271

Vancouver

Yang C, Guo X, Zhu M, Ibragimov B, Yuan Y. Mutual-Prototype Adaptation for Cross-Domain Polyp Segmentation. IEEE Journal of Biomedical and Health Informatics. 2021;25(10):3886-3897. https://doi.org/10.1109/JBHI.2021.3077271

Author

Yang, Chen ; Guo, Xiaoqing ; Zhu, Meilu ; Ibragimov, Bulat ; Yuan, Yixuan. / Mutual-Prototype Adaptation for Cross-Domain Polyp Segmentation. In: IEEE Journal of Biomedical and Health Informatics. 2021 ; Vol. 25, No. 10. pp. 3886-3897.

Bibtex

@article{c489aaf3ec0c4a7e9b39cd1834f06ac2,
title = "Mutual-Prototype Adaptation for Cross-Domain Polyp Segmentation",
abstract = "Accurate segmentation of polyps from colonoscopy images provides useful information for the diagnosis and treatment of colorectal cancer. Although deep learning methods have advanced automatic polyp segmentation, their performance often degrades when applied to new data acquired from different scanners or sequences (target domain). As manual annotation is tedious and labor-intensive for a new target domain, leveraging knowledge learned from the labeled source domain to improve performance in the unlabeled target domain is in high demand. In this work, we propose a mutual-prototype adaptation network to eliminate domain shifts in multi-center and multi-device colonoscopy images. We first devise a mutual-prototype alignment (MPA) module with a prototype relation function to refine features through self-domain and cross-domain information in a coarse-to-fine process. Two auxiliary modules, progressive self-training (PST) and disentangled reconstruction (DR), are then proposed to improve segmentation performance. The PST module selects reliable pseudo labels through a novel uncertainty-guided self-training loss to obtain accurate prototypes in the target domain. The DR module reconstructs the original images by jointly utilizing prediction results and private prototypes to maintain semantic consistency and provide complementary supervision. We extensively evaluate the polyp segmentation performance of the proposed model on three conventional colonoscopy datasets: CVC-DB, Kvasir-SEG, and ETIS-Larib. Comprehensive experimental results demonstrate that the proposed model outperforms state-of-the-art methods.",
keywords = "domain adaptation, Polyp segmentation, prototype, reconstruction, self-training",
author = "Chen Yang and Xiaoqing Guo and Meilu Zhu and Bulat Ibragimov and Yixuan Yuan",
note = "Publisher Copyright: {\textcopyright} 2013 IEEE.",
year = "2021",
doi = "10.1109/JBHI.2021.3077271",
language = "English",
volume = "25",
pages = "3886--3897",
journal = "IEEE Journal of Biomedical and Health Informatics",
issn = "2168-2194",
publisher = "Institute of Electrical and Electronics Engineers",
number = "10",
}
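The abstract above centers on prototype-based feature alignment. As an informal illustration only (not the paper's actual MPA module, which additionally maintains mutual prototypes across domains and refines coarse-to-fine), a class prototype can be computed by masked average pooling over a feature map, and a coarse segmentation prior obtained by cosine similarity between each spatial feature and that prototype. All function names here are hypothetical:

```python
import numpy as np

def masked_average_prototype(features, mask):
    """Compute a class prototype by masked average pooling.

    features: (C, H, W) feature map; mask: (H, W) binary class mask.
    Returns a (C,) prototype vector.
    """
    weights = mask / max(mask.sum(), 1e-8)  # normalize mask to sum to 1
    return (features * weights[None, :, :]).sum(axis=(1, 2))

def prototype_similarity_map(features, prototype):
    """Cosine similarity between each spatial feature vector and a
    prototype, yielding an (H, W) map usable as a segmentation prior."""
    f = features / (np.linalg.norm(features, axis=0, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return np.einsum('chw,c->hw', f, p)
```

With an idealized feature map whose polyp pixels all share one direction, the similarity map is near 1 inside the masked region and near 0 outside, which is the behavior a prototype-relation refinement builds on.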

RIS

TY - JOUR

T1 - Mutual-Prototype Adaptation for Cross-Domain Polyp Segmentation

AU - Yang, Chen

AU - Guo, Xiaoqing

AU - Zhu, Meilu

AU - Ibragimov, Bulat

AU - Yuan, Yixuan

N1 - Publisher Copyright: © 2013 IEEE.

PY - 2021

Y1 - 2021

N2 - Accurate segmentation of polyps from colonoscopy images provides useful information for the diagnosis and treatment of colorectal cancer. Although deep learning methods have advanced automatic polyp segmentation, their performance often degrades when applied to new data acquired from different scanners or sequences (target domain). As manual annotation is tedious and labor-intensive for a new target domain, leveraging knowledge learned from the labeled source domain to improve performance in the unlabeled target domain is in high demand. In this work, we propose a mutual-prototype adaptation network to eliminate domain shifts in multi-center and multi-device colonoscopy images. We first devise a mutual-prototype alignment (MPA) module with a prototype relation function to refine features through self-domain and cross-domain information in a coarse-to-fine process. Two auxiliary modules, progressive self-training (PST) and disentangled reconstruction (DR), are then proposed to improve segmentation performance. The PST module selects reliable pseudo labels through a novel uncertainty-guided self-training loss to obtain accurate prototypes in the target domain. The DR module reconstructs the original images by jointly utilizing prediction results and private prototypes to maintain semantic consistency and provide complementary supervision. We extensively evaluate the polyp segmentation performance of the proposed model on three conventional colonoscopy datasets: CVC-DB, Kvasir-SEG, and ETIS-Larib. Comprehensive experimental results demonstrate that the proposed model outperforms state-of-the-art methods.

AB - Accurate segmentation of polyps from colonoscopy images provides useful information for the diagnosis and treatment of colorectal cancer. Although deep learning methods have advanced automatic polyp segmentation, their performance often degrades when applied to new data acquired from different scanners or sequences (target domain). As manual annotation is tedious and labor-intensive for a new target domain, leveraging knowledge learned from the labeled source domain to improve performance in the unlabeled target domain is in high demand. In this work, we propose a mutual-prototype adaptation network to eliminate domain shifts in multi-center and multi-device colonoscopy images. We first devise a mutual-prototype alignment (MPA) module with a prototype relation function to refine features through self-domain and cross-domain information in a coarse-to-fine process. Two auxiliary modules, progressive self-training (PST) and disentangled reconstruction (DR), are then proposed to improve segmentation performance. The PST module selects reliable pseudo labels through a novel uncertainty-guided self-training loss to obtain accurate prototypes in the target domain. The DR module reconstructs the original images by jointly utilizing prediction results and private prototypes to maintain semantic consistency and provide complementary supervision. We extensively evaluate the polyp segmentation performance of the proposed model on three conventional colonoscopy datasets: CVC-DB, Kvasir-SEG, and ETIS-Larib. Comprehensive experimental results demonstrate that the proposed model outperforms state-of-the-art methods.

KW - domain adaptation

KW - Polyp segmentation

KW - prototype

KW - reconstruction

KW - self-training

UR - http://www.scopus.com/inward/record.url?scp=85105869408&partnerID=8YFLogxK

U2 - 10.1109/JBHI.2021.3077271

DO - 10.1109/JBHI.2021.3077271

M3 - Journal article

C2 - 33945490

AN - SCOPUS:85105869408

VL - 25

SP - 3886

EP - 3897

JO - IEEE Journal of Biomedical and Health Informatics

JF - IEEE Journal of Biomedical and Health Informatics

SN - 2168-2194

IS - 10

ER -
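The PST module described in the abstract selects reliable pseudo labels under an uncertainty-guided self-training loss. A minimal sketch of the general idea, assuming a common entropy-thresholding scheme rather than the paper's specific loss (the function name and threshold are illustrative):

```python
import numpy as np

def select_pseudo_labels(probs, max_entropy=0.3):
    """Keep pseudo labels only at pixels with low predictive uncertainty.

    probs: (K, H, W) per-class probabilities for a target-domain image.
    Returns (labels, keep) where labels is the (H, W) argmax map and
    keep is a boolean (H, W) mask of pixels whose prediction entropy
    falls below max_entropy (in nats).
    """
    entropy = -(probs * np.log(probs + 1e-8)).sum(axis=0)  # per-pixel entropy
    keep = entropy < max_entropy
    labels = probs.argmax(axis=0)
    return labels, keep
```

Confident pixels (e.g. probabilities near [0.99, 0.01]) pass the threshold and contribute to target-domain prototype estimation; ambiguous pixels (near [0.5, 0.5], entropy ≈ 0.69 nats) are excluded, which is the filtering role uncertainty plays in self-training.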
