Learn Before
Concept

Expensive to Use

Markov chain methods consist of repeatedly applying stochastic updates until the state eventually begins to yield samples from the equilibrium distribution. However, although samples drawn at equilibrium are identically distributed, they are not independent: any two successive samples are highly correlated with each other. To mitigate this correlation, we can return only every n-th sample (thinning). Markov chains are therefore expensive to use: they require time to burn in to the equilibrium distribution, and additional time after reaching equilibrium to transition from one sample to a reasonably decorrelated one.
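Both costs can be seen in a minimal sketch of a random-walk Metropolis chain targeting a standard normal (an illustrative example; the function name, starting point, and the specific `burn_in` and `thin` values are assumptions, not prescribed by the text):

```python
import math
import random

def metropolis_normal(n_samples, burn_in=1000, thin=10, seed=0):
    """Draw samples from N(0, 1) with a random-walk Metropolis chain,
    discarding the first `burn_in` updates and then keeping only every
    `thin`-th state to reduce autocorrelation between returned samples."""
    rng = random.Random(seed)
    x = 5.0  # deliberately poor start, far from equilibrium
    kept = []
    total_steps = burn_in + n_samples * thin
    for step in range(total_steps):
        proposal = x + rng.gauss(0.0, 1.0)        # stochastic update
        log_accept = -0.5 * (proposal**2 - x**2)  # log density ratio for N(0, 1)
        if math.log(rng.random()) < log_accept:
            x = proposal
        # Return a state only after burn-in, and only every `thin` steps
        if step >= burn_in and (step - burn_in) % thin == 0:
            kept.append(x)
    return kept

samples = metropolis_normal(2000)
```

Note that producing 2000 usable samples costs 1000 + 2000 × 10 = 21000 chain updates: most of the computation is spent on states that are thrown away, which is exactly why these methods are expensive.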


Updated 2021-07-29

References


Tags

Data Science