Video text detection and recognition: Dataset and benchmark
Publication: Contribution to journal › Conference article › Research › peer-reviewed
Standard
Video text detection and recognition: Dataset and benchmark. / Nguyen, Phuc Xuan; Wang, Kai; Belongie, Serge.
In: 2014 IEEE Winter Conference on Applications of Computer Vision, WACV 2014, 2014, pp. 776-783.
RIS
TY - GEN
T1 - Video text detection and recognition: Dataset and benchmark
T2 - 2014 IEEE Winter Conference on Applications of Computer Vision, WACV 2014
AU - Nguyen, Phuc Xuan
AU - Wang, Kai
AU - Belongie, Serge
PY - 2014
Y1 - 2014
N2 - This paper focuses on the problem of text detection and recognition in videos. Even though text detection and recognition in images has seen much progress in recent years, relatively little work has been done to extend these solutions to the video domain. In this work, we extend an existing end-to-end solution for text recognition in natural images to video. We explore a variety of methods for training local character models and explore methods to capitalize on the temporal redundancy of text in video. We present detection performance using the Video Analysis and Content Extraction (VACE) benchmarking framework on the ICDAR 2013 Robust Reading Challenge 3 video dataset and on a new video text dataset. We also propose a new performance metric based on precision-recall curves to measure the performance of text recognition in videos. Using this metric, we provide early video text recognition results on the above mentioned datasets.
UR - http://www.scopus.com/inward/record.url?scp=84904675660&partnerID=8YFLogxK
U2 - 10.1109/WACV.2014.6836024
DO - 10.1109/WACV.2014.6836024
M3 - Conference article
AN - SCOPUS:84904675660
SP - 776
EP - 783
JO - 2014 IEEE Winter Conference on Applications of Computer Vision, WACV 2014
JF - 2014 IEEE Winter Conference on Applications of Computer Vision, WACV 2014
Y2 - 24 March 2014 through 26 March 2014
ER -