Detecting oriented text in natural images by linking segments
Research output: Contribution to journal › Conference article › Research › peer-review
Most state-of-the-art text detection methods are specific to horizontal Latin text and are not fast enough for real-time applications. We introduce Segment Linking (SegLink), an oriented text detection method. The main idea is to decompose text into two locally detectable elements, namely segments and links. A segment is an oriented box covering a part of a word or text line; a link connects two adjacent segments, indicating that they belong to the same word or text line. Both elements are detected densely at multiple scales by an end-to-end trained, fully convolutional neural network. Final detections are produced by combining segments connected by links. Compared with previous methods, SegLink improves along the dimensions of accuracy, speed, and ease of training. It achieves an f-measure of 75.0% on the standard ICDAR 2015 Incidental (Challenge 4) benchmark, outperforming the previous best by a large margin. It runs at over 20 FPS on 512×512 images. Moreover, without modification, SegLink is able to detect long lines of non-Latin text, such as Chinese.
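The combining step described above — grouping segments that are connected by links into whole words or text lines — amounts to finding connected components over the link graph. The sketch below illustrates that grouping idea with a union-find structure; it is not the paper's exact combining rule (which also fuses the grouped oriented boxes geometrically), and the segment/link representation here is a simplified assumption for illustration.

```python
# Illustrative sketch of the link-combining idea in SegLink (not the paper's
# exact algorithm): segments are identified by index, and each predicted link
# is a pair of segment indices. Segments joined by links form one word/line.

def group_segments(num_segments, links):
    """Return connected components of segment indices given link pairs."""
    parent = list(range(num_segments))  # union-find forest, one node per segment

    def find(i):
        # Find the root of i with path halving for efficiency.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    for i, j in links:
        union(i, j)

    # Collect segments by their root: each group is one detected word/line.
    groups = {}
    for i in range(num_segments):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Example: 5 detected segments; links join (0,1), (1,2) and (3,4),
# so segments 0-2 form one text line and 3-4 another.
print(group_segments(5, [(0, 1), (1, 2), (3, 4)]))
# → [[0, 1, 2], [3, 4]]
```

In the full method, each resulting group would then be fused into a single oriented box (e.g. by combining the member segments' positions and angles) to produce the final detection.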
| Original language | English |
|---|---|
| Journal | Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 |
| Pages (from-to) | 3482-3490 |
| Number of pages | 9 |
| DOIs | |
| Publication status | Published - 6 Nov 2017 |
| Externally published | Yes |
| Event | 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 - Honolulu, United States. Duration: 21 Jul 2017 → 26 Jul 2017 |
Conference

| Conference | 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 |
|---|---|
| Country | United States |
| City | Honolulu |
| Period | 21/07/2017 → 26/07/2017 |
Bibliographical note
Funding Information:
This work was supported in part by the National Natural Science Foundation of China (61222308 and 61573160), a Google Focused Research Award, AWS Cloud Credits for Research, a Microsoft Research Award, and a Facebook equipment donation. The authors also thank the China Scholarship Council (CSC) for supporting this work.
Publisher Copyright:
© 2017 IEEE.