Bounding the bias of contrastive divergence learning

Publication: Contribution to journal › Journal article › Research › Peer-reviewed

Optimization based on k-step contrastive divergence (CD) has become a common way to train restricted Boltzmann machines (RBMs). The k-step CD is a biased estimator of the log-likelihood gradient relying on Gibbs sampling. We derive a new upper bound for this bias. Its magnitude depends on k, the number of variables in the RBM, and the maximum change in energy that can be produced by changing a single variable. The last reflects the dependence on the absolute values of the RBM parameters. The magnitude of the bias is also affected by the distance in variation between the modeled distribution and the starting distribution of the Gibbs chain.
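To make the setting concrete, below is a minimal sketch of k-step contrastive divergence for a binary RBM, the training scheme whose gradient bias the paper bounds. The function and parameter names (cd_k_gradient, W, b, c) are illustrative assumptions, not notation from the paper; the bound itself is not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_k_gradient(v0, W, b, c, k=1, rng=None):
    """Approximate the log-likelihood gradient of a binary RBM with
    k-step contrastive divergence (CD-k).

    v0 : (n_visible,) binary training vector (start of the Gibbs chain)
    W  : (n_visible, n_hidden) weight matrix
    b  : (n_visible,) visible biases
    c  : (n_hidden,) hidden biases
    """
    rng = rng or np.random.default_rng()

    # Positive phase: hidden activation probabilities given the data.
    ph0 = sigmoid(v0 @ W + c)

    # Run k steps of blocked Gibbs sampling, starting the chain at the data.
    v = v0.copy()
    for _ in range(k):
        h = (rng.random(c.shape) < sigmoid(v @ W + c)).astype(float)
        v = (rng.random(b.shape) < sigmoid(W @ h + b)).astype(float)
    phk = sigmoid(v @ W + c)

    # CD-k estimate: data statistics minus k-step sample statistics.
    # The estimate is biased because the chain is truncated after k steps
    # rather than run to equilibrium.
    dW = np.outer(v0, ph0) - np.outer(v, phk)
    db = v0 - v
    dc = ph0 - phk
    return dW, db, dc
```

A typical use would be gradient ascent on the log-likelihood, e.g. `W += eta * dW`, with the bias of this update direction shrinking as k grows.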
Journal: Neural Computation
Issue number: 3
Pages (from-to): 664-673
Number of pages: 10
Status: Published - 2011

ID: 32089131