Over-Smoothing As A Low-Pass Convolutional Filter

Suppose we have a simplified GNN, i.e. stacked graph convolution layers with the nonlinearities removed:

$$\mathbf{H}^{(k)} = \mathbf{A}_{sym} \mathbf{H}^{(k-1)} \mathbf{W}^{(k)} = \mathbf{A}_{sym}^k \mathbf{X} \mathbf{W},$$

where $\mathbf{A}_{sym}$ is the symmetric normalized adjacency matrix, $\mathbf{X}$ is the input feature matrix, and $\mathbf{W} = \mathbf{W}^{(1)} \cdots \mathbf{W}^{(k)}$ collapses the per-layer weight matrices into one.

If $k$ is large enough that we reach a fixed point, then we'll have $\mathbf{A}_{sym} \mathbf{H}^{(k)} = \mathbf{H}^{(k)}$. At this fixed point, every node's representation is determined by the dominant eigenvector of $\mathbf{A}_{sym}$.
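To see why, here is a brief sketch of the standard power-iteration argument (the assumptions here are ours, not stated in the text above: a connected graph with self-loops, so $\mathbf{A}_{sym}$ has dominant eigenpair $(\lambda_1 = 1, \mathbf{v}_1)$ and $|\lambda_i| < 1$ for $i \ge 2$). Expanding any feature column $\mathbf{x}$ in the eigenbasis of $\mathbf{A}_{sym}$ gives

$$\mathbf{A}_{sym}^k \mathbf{x} = \mathbf{A}_{sym}^k \sum_i c_i \mathbf{v}_i = \sum_i c_i \lambda_i^k \mathbf{v}_i \xrightarrow{k \to \infty} c_1 \mathbf{v}_1,$$

since $|\lambda_i|^k \to 0$ for $i \ge 2$. Every column of $\mathbf{A}_{sym}^k \mathbf{X}$ therefore collapses onto the same direction $\mathbf{v}_1$, and nodes differ only by the scalar entries of $\mathbf{v}_1$.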

So we can see that stacking many rounds of message passing acts as a low-pass convolutional filter: each multiplication by $\mathbf{A}_{sym}$ averages a node's features with its neighbors', damping the high-frequency components that distinguish neighboring nodes, until all node representations become nearly identical and uninformative. This is the over-smoothing problem.
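A minimal numerical sketch of this collapse (the toy path graph, feature size, and step counts are illustrative choices, not from the text above):

```python
import numpy as np

# Toy undirected graph: a 5-node path 0-1-2-3-4.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Symmetric normalization with self-loops:
# A_sym = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(n)
d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
A_sym = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]

rng = np.random.default_rng(0)
H = rng.normal(size=(n, 3))  # random initial node features X

for k in (1, 5, 20, 100):
    Hk = np.linalg.matrix_power(A_sym, k) @ H
    # Normalize each node's row; if the rows collapse onto the dominant
    # eigenvector direction, all pairwise cosine similarities approach 1.
    rows = Hk / np.linalg.norm(Hk, axis=1, keepdims=True)
    print(f"k={k:3d}  min pairwise row cosine = {(rows @ rows.T).min():.6f}")
```

Running this, the minimum pairwise cosine should climb toward 1.0 as $k$ grows: after enough propagation steps every node's feature vector points in the same direction, scaled only by the node's entry in $\mathbf{v}_1$ (here proportional to $\sqrt{d_i + 1}$), which is exactly the dominant-eigenvector collapse described above.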

Tags

Deep Learning (in Machine Learning)

Data Science