Concept

Probabilistic Graphical Models (PGM)

A prominent example of an alternative theoretical motivation for the GNN framework comes from its connection to variational inference in probabilistic graphical models (PGMs).

In this probabilistic perspective, we view the embeddings $z_u, \forall u \in V$ as latent variables that we attempt to infer. Assume that we observe the graph structure and the input node features $X$, and that the goal is to infer the underlying latent variables that can explain this observed data. The message passing operation underlying GNNs can then be viewed as a neural network analogue of the message passing algorithms commonly used in variational inference to infer distributions over latent variables.
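To make the analogy concrete, here is a minimal sketch (an illustrative assumption, not from the source) of mean-field variational inference on a pairwise Markov random field. Each iteration aggregates information from a node's neighbors and then updates that node's approximate marginal, mirroring the aggregate-and-update structure of GNN message passing. All names (`mean_field_update`, `adj`, `unary`, `pairwise`) are hypothetical.

```python
import numpy as np

def mean_field_update(q, adj, unary, pairwise, iters=10):
    """Mean-field updates for a pairwise MRF with K discrete latent states.

    q        -- dict: node -> (K,) approximate marginal over latent states
    adj      -- dict: node -> list of neighbor nodes
    unary    -- dict: node -> (K,) log-potentials (observed evidence)
    pairwise -- (K, K) log-potentials shared by all edges
    """
    for _ in range(iters):
        for u in q:
            # Aggregate: expected pairwise log-potential from each neighbor,
            # analogous to a GNN's neighborhood aggregation step.
            msg = sum(pairwise @ q[v] for v in adj[u])
            # Update: combine the aggregated message with the node's own
            # evidence, then renormalize -- analogous to a GNN update step.
            logits = unary[u] + msg
            q[u] = np.exp(logits - logits.max())
            q[u] /= q[u].sum()
    return q

# Tiny 3-node chain 0 - 1 - 2 with K = 2 states: evidence at node 0
# propagates to its neighbors through repeated local updates.
adj = {0: [1], 1: [0, 2], 2: [1]}
unary = {u: np.zeros(2) for u in adj}
unary[0] = np.array([2.0, 0.0])                # node 0 prefers state 0
pairwise = np.array([[1.0, 0.0], [0.0, 1.0]])  # neighbors prefer agreement
q = mean_field_update({u: np.full(2, 0.5) for u in adj}, adj, unary, pairwise)
```

As in a GNN layer, each node's update depends only on its immediate neighbors, and repeating the local update propagates information across the graph; this shared structure is the parallel the passage describes.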

Updated 2022-07-15

Tags

Data Science