Language-driven Semantic Segmentation

Publication: Working paper › Preprint › Research

Standard

Language-driven Semantic Segmentation. / Li, Boyi; Weinberger, Kilian Q.; Belongie, Serge; Koltun, Vladlen; Ranftl, René.

arXiv.org, 2022.


Harvard

Li, B, Weinberger, KQ, Belongie, S, Koltun, V & Ranftl, R 2022 'Language-driven Semantic Segmentation' arXiv.org. <https://arxiv.org/pdf/2201.03546.pdf>

APA

Li, B., Weinberger, K. Q., Belongie, S., Koltun, V., & Ranftl, R. (2022). Language-driven Semantic Segmentation. arXiv.org. https://arxiv.org/pdf/2201.03546.pdf

Vancouver

Li B, Weinberger KQ, Belongie S, Koltun V, Ranftl R. Language-driven Semantic Segmentation. arXiv.org. 2022.

Author

Li, Boyi ; Weinberger, Kilian Q. ; Belongie, Serge ; Koltun, Vladlen ; Ranftl, René. / Language-driven Semantic Segmentation. arXiv.org, 2022.

Bibtex

@techreport{0fdfe60f54ac419e89c8f2fddac988de,
title = "Language-driven Semantic Segmentation",
abstract = "We present LSeg, a novel model for language-driven semantic image segmentation. LSeg uses a text encoder to compute embeddings of descriptive input labels (e.g., {"}grass{"} or {"}building{"}) together with a transformer-based image encoder that computes dense per-pixel embeddings of the input image. The image encoder is trained with a contrastive objective to align pixel embeddings to the text embedding of the corresponding semantic class. The text embeddings provide a flexible label representation in which semantically similar labels map to similar regions in the embedding space (e.g., {"}cat{"} and {"}furry{"}). This allows LSeg to generalize to previously unseen categories at test time, without retraining or even requiring a single additional training sample. We demonstrate that our approach achieves highly competitive zero-shot performance compared to existing zero- and few-shot semantic segmentation methods, and even matches the accuracy of traditional segmentation algorithms when a fixed label set is provided. Code and demo are available at https://github.com/isl-org/lang-seg.",
author = "Boyi Li and Weinberger, {Kilian Q.} and Serge Belongie and Vladlen Koltun and Ren{\'e} Ranftl",
year = "2022",
language = "English",
publisher = "arXiv.org",
type = "WorkingPaper",
institution = "arXiv.org",

}

RIS

TY - UNPB

T1 - Language-driven Semantic Segmentation

AU - Li, Boyi

AU - Weinberger, Kilian Q.

AU - Belongie, Serge

AU - Koltun, Vladlen

AU - Ranftl, René

PY - 2022

Y1 - 2022

N2 - We present LSeg, a novel model for language-driven semantic image segmentation. LSeg uses a text encoder to compute embeddings of descriptive input labels (e.g., "grass" or "building") together with a transformer-based image encoder that computes dense per-pixel embeddings of the input image. The image encoder is trained with a contrastive objective to align pixel embeddings to the text embedding of the corresponding semantic class. The text embeddings provide a flexible label representation in which semantically similar labels map to similar regions in the embedding space (e.g., "cat" and "furry"). This allows LSeg to generalize to previously unseen categories at test time, without retraining or even requiring a single additional training sample. We demonstrate that our approach achieves highly competitive zero-shot performance compared to existing zero- and few-shot semantic segmentation methods, and even matches the accuracy of traditional segmentation algorithms when a fixed label set is provided. Code and demo are available at https://github.com/isl-org/lang-seg.

AB - We present LSeg, a novel model for language-driven semantic image segmentation. LSeg uses a text encoder to compute embeddings of descriptive input labels (e.g., "grass" or "building") together with a transformer-based image encoder that computes dense per-pixel embeddings of the input image. The image encoder is trained with a contrastive objective to align pixel embeddings to the text embedding of the corresponding semantic class. The text embeddings provide a flexible label representation in which semantically similar labels map to similar regions in the embedding space (e.g., "cat" and "furry"). This allows LSeg to generalize to previously unseen categories at test time, without retraining or even requiring a single additional training sample. We demonstrate that our approach achieves highly competitive zero-shot performance compared to existing zero- and few-shot semantic segmentation methods, and even matches the accuracy of traditional segmentation algorithms when a fixed label set is provided. Code and demo are available at https://github.com/isl-org/lang-seg.

UR - https://arxiv.org/abs/2201.03546

M3 - Preprint

BT - Language-driven Semantic Segmentation

PB - arXiv.org

ER -
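The abstract describes LSeg's test-time decision rule: a text encoder embeds each candidate label, the image encoder produces a dense per-pixel embedding, and each pixel is assigned the label whose text embedding is most similar. The sketch below illustrates only that matching step with mocked embeddings (the encoders, shapes, and function names are hypothetical, not the authors' implementation):

```python
import numpy as np

def normalize(x, axis=-1):
    """L2-normalize vectors along the given axis."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def segment(pixel_emb, label_emb):
    """Assign each pixel the index of its most similar label embedding.

    pixel_emb: (H, W, D) dense per-pixel embeddings from an image encoder.
    label_emb: (K, D) text embeddings of the K candidate labels.
    Returns an (H, W) integer label map.
    """
    p = normalize(pixel_emb)       # (H, W, D) unit pixel embeddings
    t = normalize(label_emb)       # (K, D) unit label embeddings
    scores = p @ t.T               # (H, W, K) cosine similarities
    return scores.argmax(axis=-1)  # (H, W) per-pixel label indices

# Toy demo: two labels in a 2-D embedding space and a 2x2 "image"
# whose pixel embeddings lean toward one label or the other.
labels = np.array([[1.0, 0.0],   # label 0, e.g. "grass"
                   [0.0, 1.0]])  # label 1, e.g. "building"
pixels = np.array([[[0.9, 0.1], [0.2, 0.8]],
                   [[0.7, 0.3], [0.1, 0.9]]])
print(segment(pixels, labels))   # → [[0 1]
                                 #    [0 1]]
```

Because the label set enters only through `label_emb`, new categories can be swapped in at test time without touching the image encoder, which is the zero-shot property the abstract highlights.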
