Learn Before
Concept

Hilbert Space Embeddings of Distributions

Let $p(x)$ denote a probability density function defined over the random variable $x \in \mathbb{R}^{m}$. Given a feature map $\phi : \mathbb{R}^{m} \rightarrow \mathcal{F}$ into a feature space $\mathcal{F}$ (a reproducing kernel Hilbert space), we can represent the density $p(x)$ by its expected value under this feature map: $\mu_{x} = \int_{\mathbb{R}^{m}} \phi(x)\, p(x)\, dx$.

The key idea behind Hilbert space embeddings of distributions is that the map $p(x) \mapsto \mu_{x}$ is injective, as long as a suitable (characteristic) feature map $\phi$ is used. This means that $\mu_{x}$ can serve as a sufficient statistic for $p(x)$: any computation we want to perform on $p(x)$ can be equivalently expressed as a function of the embedding $\mu_{x}$.
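As a minimal sketch of this idea: with a characteristic kernel such as the Gaussian RBF kernel, the squared distance between the (empirical) mean embeddings of two samples, known as the squared maximum mean discrepancy (MMD), is near zero only when the samples come from the same distribution. The kernel trick lets us compute it without ever forming $\phi$ explicitly; the kernel bandwidth and sample sizes below are illustrative choices, not from the original text.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gram matrix of the Gaussian RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)),
    # a characteristic kernel, so the induced mean embedding mu_x is injective.
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Squared distance ||mu_X - mu_Y||^2 between empirical mean embeddings,
    # expanded via inner products of features, i.e. kernel evaluations only.
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(500, 2))  # samples from N(0, I)
B = rng.normal(0.0, 1.0, size=(500, 2))  # more samples from N(0, I)
C = rng.normal(2.0, 1.0, size=(500, 2))  # samples from a shifted Gaussian

print(mmd2(A, B))  # near zero: embeddings of the same distribution coincide
print(mmd2(A, C))  # clearly positive: different distributions, different embeddings
```

Because the embedding is injective, comparing $\mu_{x}$ vectors is a valid proxy for comparing the underlying distributions themselves, which is the basis of kernel two-sample tests.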

Updated 2022-07-15

Tags

Data Science