Learn Before
Relation

Training a Deep Belief Network

  • We start by training an RBM to maximize $\mathbb{E}_{v\sim p_{\text{data}}}\log p(v)$ using contrastive divergence or stochastic maximum likelihood.
  • A second RBM is then trained to approximately maximize $\mathbb{E}_{v\sim p_{\text{data}}}\,\mathbb{E}_{h^{(1)}\sim p^{(1)}(h^{(1)}\mid v)}\log p^{(2)}(h^{(1)})$
  • $p^{(1)}$ --> the probability distribution represented by the first RBM
  • $p^{(2)}$ --> the probability distribution represented by the second RBM
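The greedy layer-wise procedure above can be sketched in code: train one RBM on the data with CD-1, then sample $h^{(1)} \sim p^{(1)}(h^{(1)}\mid v)$ and treat those samples as training data for the second RBM. This is a minimal NumPy sketch, not a production implementation; the class and method names (`RBM`, `cd1_step`) and the toy data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # draw binary units given Bernoulli probabilities p
    return (rng.random(p.shape) < p).astype(float)

class RBM:
    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))
        self.b = np.zeros(n_vis)  # visible bias
        self.c = np.zeros(n_hid)  # hidden bias

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0, lr=0.1):
        # one contrastive-divergence (CD-1) update:
        # positive phase from data, negative phase from one Gibbs step
        ph0 = self.hidden_probs(v0)
        h0 = sample(ph0)
        v1 = sample(self.visible_probs(h0))
        ph1 = self.hidden_probs(v1)
        n = len(v0)
        self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b += lr * (v0 - v1).mean(axis=0)
        self.c += lr * (ph0 - ph1).mean(axis=0)

# hypothetical toy data: 6 correlated binary features
base = rng.random((200, 1))
data = (rng.random((200, 6)) < base).astype(float)

# first RBM approximately maximizes E_{v ~ p_data} log p(v)
rbm1 = RBM(6, 4)
for _ in range(100):
    rbm1.cd1_step(data)

# greedy stacking: second RBM is trained on samples
# h1 ~ p1(h1 | v), i.e. it models the first RBM's hidden layer
h1 = sample(rbm1.hidden_probs(data))
rbm2 = RBM(4, 3)
for _ in range(100):
    rbm2.cd1_step(h1)
```

The key design point is that the second RBM never sees the raw data directly: its "visible" units are the first RBM's hidden samples, which is what makes the stacking greedy and layer-wise.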


Updated 2021-07-22

Tags

Data Science

Learn After