On feature normalization and data augmentation
Research output: Contribution to journal › Conference article › Research › peer-review
The moments (i.e., the mean and standard deviation) of latent features are often removed as noise when training image recognition models, to increase stability and reduce training time. In the field of image generation, however, the moments play a much more central role. Studies have shown that the moments extracted by instance normalization and positional normalization can roughly capture the style and shape information of an image. Instead of being discarded, these moments are instrumental to the generation process. In this paper we propose Moment Exchange, an implicit data augmentation method that encourages recognition models to also utilize the moment information. Specifically, we replace the moments of the learned features of one training image with those of another, and interpolate the target labels accordingly, forcing the model to extract training signal from the moments in addition to the normalized features. Because our approach is fast, operates entirely in feature space, and mixes different signals than prior methods, it can be combined effectively with existing augmentation approaches. We demonstrate its efficacy across several recognition benchmark data sets, where it improves the generalization capability of highly competitive baseline networks with remarkable consistency.
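The core operation described in the abstract can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the authors' implementation: it assumes positional-normalization-style moments (mean and standard deviation taken across the channel axis at each spatial location), swaps one sample's moments for another's, and interpolates the labels in a mixup-like fashion. The function name `moment_exchange` and the mixing coefficient `lam` are illustrative choices, not names from the paper.

```python
import numpy as np

def moment_exchange(features_a, features_b, eps=1e-5):
    """Inject the (positional) moments of features_b into features_a.

    features_a, features_b: arrays of shape (C, H, W).
    Moments are computed over the channel axis at each spatial
    position, in the style of positional normalization.
    """
    mean_a = features_a.mean(axis=0, keepdims=True)
    std_a = features_a.std(axis=0, keepdims=True) + eps
    mean_b = features_b.mean(axis=0, keepdims=True)
    std_b = features_b.std(axis=0, keepdims=True) + eps

    # Remove A's moments, then re-inject B's moments.
    normalized_a = (features_a - mean_a) / std_a
    return normalized_a * std_b + mean_b

def interpolate_labels(y_a, y_b, lam):
    """Mix the one-hot target labels with coefficient lam in [0, 1]."""
    return lam * y_a + (1.0 - lam) * y_b
```

After the exchange, the output carries the normalized feature content of image A but the per-position mean and standard deviation of image B, so the label is interpolated between the two targets to make both signals informative for the loss.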
Original language | English |
---|---|
Journal | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition |
Pages (from-to) | 12378-12387 |
Number of pages | 10 |
ISSN | 1063-6919 |
DOIs | |
Publication status | Published - 2021 |
Externally published | Yes |
Event | 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021 - Virtual, Online, United States. Duration: 19 Jun 2021 → 25 Jun 2021 |
Conference
Conference | 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021 |
---|---|
Country | United States |
City | Virtual, Online |
Period | 19/06/2021 → 25/06/2021 |
Bibliographical note
Funding Information:
This research is supported in part by the grants from Facebook, DARPA, the National Science Foundation (III-1618134, III-1526012, IIS1149882, IIS-1724282, and TRIPODS-1740822), the Office of Naval Research DOD (N00014-17-1-2175), Bill and Melinda Gates Foundation. We are thankful for generous support by Zillow and SAP America Inc. Facebook has no collaboration with the other sponsors of this project. In particular, we appreciate the valuable discussion with Gao Huang.
Publisher Copyright:
© 2021 IEEE