Learn Before
Probabilistic Graphical Models (PGM)
One prominent example of alternative theoretical motivations for the GNN framework is the motivation of GNNs based on connections to variational inference in probabilistic graphical models (PGMs).
In this probabilistic perspective, we view the embedding of each node as a latent variable that we attempt to infer. We assume that we observe the graph structure and the input node features X, and the goal is to infer the underlying latent variables that can explain this observed data. The message passing operation that underlies GNNs can then be viewed as a neural network analogue of the message passing algorithms commonly used in variational inference to infer distributions over latent variables.
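To make the analogy concrete, here is a minimal sketch (an illustration, not the book's exact formulation) of one neural message-passing step: each node aggregates messages from its neighbors and updates its embedding, loosely mirroring how message-passing algorithms for variational inference (e.g., loopy belief propagation) iteratively update beliefs over latent variables. The function name, weight matrices, and tanh nonlinearity are assumptions chosen for simplicity.

```python
import numpy as np

def message_passing_step(A, H, W_self, W_neigh):
    """One round of neural message passing (illustrative sketch).

    A:       (n, n) adjacency matrix of the observed graph
    H:       (n, d) current node embeddings (estimates of the latent variables)
    W_self:  (d, d) transform applied to a node's own embedding
    W_neigh: (d, d) transform applied to the aggregated neighbor messages
    """
    messages = A @ H  # sum incoming messages from neighbors
    # Combine self information with neighbor messages, then apply a nonlinearity
    return np.tanh(H @ W_self + messages @ W_neigh)

# Tiny example: a 3-node path graph with 2-dimensional node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3, 2)  # observed input features X serve as initial embeddings
rng = np.random.default_rng(0)
W_self = rng.normal(size=(2, 2))
W_neigh = rng.normal(size=(2, 2))
H_next = message_passing_step(A, H, W_self, W_neigh)
```

Stacking several such steps lets information propagate across multiple hops, just as repeated message updates in variational inference refine the approximate posterior over the latent variables.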
Tags
Data Science
Related
Node Embeddings
Graph Representation Learning by William Hamilton
Neighborhood Overlap Detection
K-Clustering of Graph Nodes
Graph Data Structure
Machine Learning on Graphs
Graph Statistics and Kernel Methods
Generalized neighborhood aggregation: Set aggregators
Graph Neural Networks (GNNs)
Probabilistic Graphical Models (PGM)
Adversarial Approaches: Generative adversarial networks (GANs)
Deep Generative Models
Recurrent Models for Graph Generation
Traditional Graph Generation
Key Area For Future Graph Model Development