Log-Likelihood Gradient

What makes learning undirected models by maximum likelihood particularly difficult is that the partition function Z(θ) depends on the parameters. The log-likelihood gradient therefore splits into a term from the unnormalized distribution and a term from the partition function:

∇θ log p(x; θ) = ∇θ log p̃(x; θ) − ∇θ log Z(θ)

Under certain regularity conditions (those that allow exchanging differentiation with the sum or integral over x), the gradient of the log partition function is an expectation under the model's own distribution. This follows from ∇θ log Z = ∇θZ / Z = Σx ∇θ p̃(x) / Z = Σx p(x) ∇θ log p̃(x), giving:

∇θ log Z = Ex∼p(x) [∇θ log p̃(x)]
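As a sanity check, the identity can be verified numerically on a toy discrete model; the exponential-family form p̃(x; θ) = exp(θ·x) over x ∈ {0,1}² used here is an assumption for illustration, not part of the original text:

```python
import numpy as np

# Toy discrete model over x in {0,1}^2 (hypothetical example):
# unnormalized p~(x; theta) = exp(theta . x)
states = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
theta = np.array([0.5, -1.2])

def log_Z(theta):
    # log partition function: log sum_x p~(x; theta)
    return np.log(np.exp(states @ theta).sum())

# Normalized model distribution p(x) = p~(x) / Z
p = np.exp(states @ theta - log_Z(theta))

# Right-hand side: E_{x~p}[grad_theta log p~(x)]; here grad_theta log p~(x) = x
expectation = (p[:, None] * states).sum(axis=0)

# Left-hand side: grad_theta log Z via central finite differences
eps = 1e-6
fd = np.array([
    (log_Z(theta + eps * np.eye(2)[i]) - log_Z(theta - eps * np.eye(2)[i])) / (2 * eps)
    for i in range(2)
])

print(np.allclose(fd, expectation, atol=1e-6))  # → True
```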

The Monte Carlo approach to learning undirected models provides an intuitive framework in which we can consider both phases: the positive phase increases log p̃(x; θ) on samples drawn from the data, while the negative phase decreases log Z(θ) by pushing down log p̃(x) on samples drawn from the model.
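The two phases can be sketched as stochastic gradient ascent on the same toy model as above (an assumed form, p̃(x; θ) = exp(θ·x) over x ∈ {0,1}²), where the negative phase samples exactly from p(x; θ); in practice this sampling is the hard part and is usually approximated with MCMC:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed for illustration): p~(x; theta) = exp(theta . x), x in {0,1}^2
states = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

def model_probs(theta):
    w = np.exp(states @ theta)
    return w / w.sum()

# Synthetic "data": samples drawn from a ground-truth parameter theta_star
theta_star = np.array([1.0, -0.5])
data = states[rng.choice(4, size=5000, p=model_probs(theta_star))]

theta = np.zeros(2)
lr = 0.5
for _ in range(200):
    # Positive phase: E_data[grad log p~] -> mean of x over a data minibatch
    pos = data[rng.choice(len(data), 64)].mean(axis=0)
    # Negative phase: E_model[grad log p~] -> mean of x over model samples
    neg = states[rng.choice(4, size=64, p=model_probs(theta))].mean(axis=0)
    theta += lr * (pos - neg)

print(theta)  # should approach theta_star, up to sampling noise
```

Because gradient ascent stops when the data and model expectations match, this is moment matching in disguise.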

Updated 2021-07-22

Tags

Data Science