Learn Before
Concept

Continuous Latent Variables in Variational Inference and Learning

When continuous latent variables are present in our graphical model, we can still perform variational inference and learning by maximizing $\mathcal{L}(\textbf{v}, \theta, q)$, the evidence lower bound (ELBO); however, unlike with discrete latent variables, we must now use calculus of variations, because the optimization is over a space of functions (the density $q$) rather than a finite set of probabilities.
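For reference, the bound itself can be written in the standard forms below (both identities are the usual ELBO decompositions; the notation follows the card's $\mathcal{L}(\textbf{v}, \theta, q)$):

$$\mathcal{L}(\textbf{v}, \theta, q) = \mathbb{E}_{\textbf{h} \sim q(\textbf{h}|\textbf{v})}\big[\log p(\textbf{h}, \textbf{v}; \theta)\big] + H(q) = \log p(\textbf{v}; \theta) - D_{\mathrm{KL}}\big(q(\textbf{h}|\textbf{v}) \,\big\|\, p(\textbf{h}|\textbf{v}; \theta)\big)$$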

When dealing with continuous latent variables under the mean field approximation $q(\textbf{h}|\textbf{v}) = \prod_{i} q(h_{i}|\textbf{v})$, we fix $q(h_{j}|\textbf{v})$ for all $j \neq i$; the optimal $q(h_{i}|\textbf{v})$ is then obtained by normalizing the unnormalized distribution $\tilde{q}(h_{i}|\textbf{v}) = \exp\big(\mathbb{E}_{\textbf{h}_{-i} \sim q(\textbf{h}_{-i}|\textbf{v})} \log \tilde{p}(\textbf{v}, \textbf{h})\big)$, as long as $p$ does not assign zero probability to any joint configuration of the variables.
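As a concrete illustration of these coordinate updates, here is a minimal NumPy sketch for a toy bivariate Gaussian $p(\textbf{h}) = \mathcal{N}(\mu, \Lambda^{-1})$, chosen because normalizing $\exp(\mathbb{E} \log \tilde{p})$ stays in closed form (each mean field factor remains Gaussian with variance $1/\Lambda_{ii}$). The model, values, and variable names are illustrative assumptions, not part of the card:

```python
import numpy as np

# Toy model (an illustrative assumption, not from the card):
# p(h) = N(mu, inv(Lam)) with precision matrix Lam.
mu = np.array([1.0, -1.0])      # true mean of p(h)
Lam = np.array([[2.0, 0.8],
                [0.8, 1.5]])    # precision matrix (symmetric positive definite)

# Mean field factors q(h_i) = N(m_i, 1 / Lam[i, i]); only the means
# m_i change during coordinate ascent, so we track them alone.
m = np.zeros(2)

for _ in range(50):
    # Each update normalizes exp(E_{h_{-i} ~ q} log p~(h)); for a Gaussian
    # log-density this is again Gaussian, and completing the square gives
    # the closed-form mean updates below.
    m[0] = mu[0] - (Lam[0, 1] / Lam[0, 0]) * (m[1] - mu[1])
    m[1] = mu[1] - (Lam[1, 0] / Lam[1, 1]) * (m[0] - mu[0])

print("mean field means:", m)   # -> approximately [1.0, -1.0]
```

In this toy case the factor means converge to the true mean, while the factored variances $1/\Lambda_{ii}$ underestimate the true marginal variances, a well-known property of mean field approximations.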

Updated 2021-07-22

Tags

Data Science