Regularization in network optimization via trimmed stochastic gradient descent with noisy label
Publication: Contribution to journal › Journal article › Research › peer-reviewed
Standard
Regularization in network optimization via trimmed stochastic gradient descent with noisy label. / Nakamura, Kensuke; Sohn, Bong Soo; Won, Kyoung Jae; Hong, Byung Woo.
In: IEEE Access, Vol. 10, 2022, p. 34706-34715.
Bibtex
@article{Nakamura2022TrimSGD,
  author  = {Nakamura, Kensuke and Sohn, Bong Soo and Won, Kyoung Jae and Hong, Byung Woo},
  title   = {Regularization in network optimization via trimmed stochastic gradient descent with noisy label},
  journal = {IEEE Access},
  volume  = {10},
  pages   = {34706--34715},
  year    = {2022},
  issn    = {2169-3536},
  doi     = {10.1109/ACCESS.2022.3171910}
}
RIS
TY - JOUR
T1 - Regularization in network optimization via trimmed stochastic gradient descent with noisy label
AU - Nakamura, Kensuke
AU - Sohn, Bong Soo
AU - Won, Kyoung Jae
AU - Hong, Byung Woo
N1 - Publisher Copyright: Author
PY - 2022
Y1 - 2022
N2 - Regularization is essential for avoiding over-fitting to training data in network optimization, leading to better generalization of the trained networks. Label noise provides a strong implicit regularization by replacing the ground-truth labels of training examples with uniform random labels. However, it can also cause misleading gradients due to the large loss associated with incorrect labels. We propose a first-order optimization method (Label-Noised Trim-SGD) that combines label noise with example trimming in order to remove outliers based on their loss. The proposed algorithm is simple, yet it enables us to impose large label noise and obtain a better regularization effect than the original methods. A quantitative analysis is performed by comparing the behavior of label noise, example trimming, and the proposed algorithm. We also present empirical results that demonstrate the effectiveness of our algorithm on major benchmarks with fundamental networks, where our method outperforms state-of-the-art optimization methods.
AB - Regularization is essential for avoiding over-fitting to training data in network optimization, leading to better generalization of the trained networks. Label noise provides a strong implicit regularization by replacing the ground-truth labels of training examples with uniform random labels. However, it can also cause misleading gradients due to the large loss associated with incorrect labels. We propose a first-order optimization method (Label-Noised Trim-SGD) that combines label noise with example trimming in order to remove outliers based on their loss. The proposed algorithm is simple, yet it enables us to impose large label noise and obtain a better regularization effect than the original methods. A quantitative analysis is performed by comparing the behavior of label noise, example trimming, and the proposed algorithm. We also present empirical results that demonstrate the effectiveness of our algorithm on major benchmarks with fundamental networks, where our method outperforms state-of-the-art optimization methods.
KW - Data models
KW - data trimming
KW - label noise
KW - Loss measurement
KW - network optimization
KW - Neural networks
KW - Noise measurement
KW - Optimization
KW - regularization
KW - Stochastic processes
KW - Training
U2 - 10.1109/ACCESS.2022.3171910
DO - 10.1109/ACCESS.2022.3171910
M3 - Journal article
AN - SCOPUS:85129585599
VL - 10
SP - 34706
EP - 34715
JO - IEEE Access
JF - IEEE Access
SN - 2169-3536
ER -
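
The abstract describes Label-Noised Trim-SGD at a high level: inject uniform label noise for its regularization effect, then trim the highest-loss examples from each mini-batch so their misleading gradients do not enter the update. The sketch below is a minimal, hypothetical PyTorch rendering of one such training step; the function name, the noise_rate and trim_frac parameters, and the choice to trim a fixed fraction per mini-batch are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a trimmed-SGD step with injected label noise
# (assumed details; not the authors' reference implementation).
import torch
import torch.nn.functional as F

def label_noised_trim_sgd_step(model, optimizer, inputs, labels,
                               num_classes, noise_rate=0.5, trim_frac=0.1):
    """One update: corrupt labels with uniform noise, then drop the
    highest-loss examples in the mini-batch before back-propagating."""
    # Replace each label with a uniformly random class with prob. noise_rate.
    noisy = labels.clone()
    mask = torch.rand(labels.shape, device=labels.device) < noise_rate
    noisy[mask] = torch.randint(0, num_classes, (int(mask.sum()),),
                                device=labels.device)

    # Per-example losses, so loss-based outliers can be identified.
    logits = model(inputs)
    losses = F.cross_entropy(logits, noisy, reduction="none")

    # Trim: keep only the examples with the smallest losses.
    keep = losses.numel() - int(trim_frac * losses.numel())
    trimmed_loss = losses.topk(keep, largest=False).values.mean()

    optimizer.zero_grad()
    trimmed_loss.backward()
    optimizer.step()
    return float(trimmed_loss)
```

Trimming by per-example loss is what makes a large noise rate tolerable in this reading: mislabeled examples tend to incur the largest losses, so dropping the top fraction removes most of the harmful gradient contributions while keeping the implicit regularization supplied by the remaining noise.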