Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection
Research output: Contribution to journal › Conference article › Research › peer-review
Standard
Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection. / Van Horn, Grant; Branson, Steve; Farrell, Ryan; Haber, Scott; Barry, Jessie; Ipeirotis, Panos; Perona, Pietro; Belongie, Serge.
In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 14.10.2015, p. 595-604.
RIS
TY - GEN
T1 - Building a bird recognition app and large scale dataset with citizen scientists
T2 - IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
AU - Van Horn, Grant
AU - Branson, Steve
AU - Farrell, Ryan
AU - Haber, Scott
AU - Barry, Jessie
AU - Ipeirotis, Panos
AU - Perona, Pietro
AU - Belongie, Serge
N1 - Publisher Copyright: © 2015 IEEE.
PY - 2015/10/14
Y1 - 2015/10/14
AB - We introduce tools and methodologies to collect high quality, large scale fine-grained computer vision datasets using citizen scientists - crowd annotators who are passionate and knowledgeable about specific domains such as birds or airplanes. We worked with citizen scientists and domain experts to collect NABirds, a new high quality dataset containing 48,562 images of North American birds with 555 categories, part annotations and bounding boxes. We find that citizen scientists are significantly more accurate than Mechanical Turkers at zero cost. We worked with bird experts to measure the quality of popular datasets like CUB-200-2011 and ImageNet and found class label error rates of at least 4%. Nevertheless, we found that learning algorithms are surprisingly robust to annotation errors and this level of training data corruption can lead to an acceptably small increase in test error if the training set has sufficient size. At the same time, we found that an expert-curated high quality test set like NABirds is necessary to accurately measure the performance of fine-grained computer vision systems. We used NABirds to train a publicly available bird recognition service deployed on the web site of the Cornell Lab of Ornithology.
UR - http://www.scopus.com/inward/record.url?scp=84959195964&partnerID=8YFLogxK
DO - 10.1109/CVPR.2015.7298658
M3 - Conference article
AN - SCOPUS:84959195964
SP - 595
EP - 604
JO - IEEE Conference on Computer Vision and Pattern Recognition. Proceedings
JF - IEEE Conference on Computer Vision and Pattern Recognition. Proceedings
SN - 1063-6919
Y2 - 7 June 2015 through 12 June 2015
ER -