Learn Before
Concept

Variational Inference and Learning

Inference can be performed by maximizing $\mathcal{L}$ with respect to $q$, and learning can be achieved by maximizing $\mathcal{L}$ with respect to $\theta$, for the equation:

L(v,θ,q)=logp(v;θ)DKL(q(hv)p(hv))\mathcal{L}(v,\theta,q)= logp(v;\theta) - D_{KL}(q(h|v)||p(h|v)) L(v,θ,q)=Ehq(logp(h,v))+H(q)\mathcal{L}(v,\theta,q)= \mathbb{E}_{h\sim q}(logp(h,v))+H(q)

However, the calculations needed to maximize $\mathcal{L}$ with respect to $\theta$ are often intractable. Therefore, learning is performed over a restricted family of possible distributions for $q$.

This family should allow for easy computation of $\mathbb{E}_{h\sim q}\log p(h,v)$ in order for learning to remain tractable.
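As a concrete sketch of why a restricted family keeps this expectation easy: with a mean-field (fully factorized) Bernoulli $q$, both $\mathbb{E}_q \log p(h,v)$ and $H(q)$ decompose into per-coordinate sums whenever $\log p(h,v)$ is linear in the binary latents $h$. The weights `w`, constant `c`, and parameters `q` below are hypothetical, chosen only for illustration; a brute-force enumeration over all latent configurations checks the factored computation.

```python
import itertools
import math

# Hypothetical toy model: for a fixed v, assume log p(h, v) = sum_i w[i]*h[i] + c
# over binary latents h_i. Linearity in h is what makes E_q factorize.
w = [0.5, -1.2, 2.0]   # hypothetical per-latent weights
c = -3.0               # hypothetical constant term of log p(h, v)

def log_joint(h):
    return sum(wi * hi for wi, hi in zip(w, h)) + c

# Mean-field variational distribution: q(h) = prod_i Bernoulli(h_i; q_i)
q = [0.3, 0.7, 0.9]

def bern_entropy(p):
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

# Tractable ELBO: both the expectation and the entropy split per coordinate.
elbo_factored = (sum(wi * qi for wi, qi in zip(w, q)) + c
                 + sum(bern_entropy(qi) for qi in q))

# Brute-force check: enumerate all 2^n latent configurations.
elbo_brute = 0.0
for h in itertools.product([0, 1], repeat=len(q)):
    prob = 1.0
    for qi, hi in zip(q, h):
        prob *= qi if hi else 1 - qi
    elbo_brute += prob * (log_joint(h) - math.log(prob))
```

The factored sum costs $O(n)$ per evaluation, while the exact expectation costs $O(2^n)$; restricting $q$ to this family is what buys the speedup.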


Updated 2021-07-28

Tags

Data Science