Concept

Leicht, Holme, and Newman (LHN) Similarity

LHN similarity is defined as:

$$
\mathbf{S}_{LHN}[u,v]=\mathbf{I}[u,v]+\frac{2m}{d_u d_v}\sum_{i=1}^{\infty}\beta^i \lambda_1^{1-i}\, \mathbf{A}^i[u,v]
$$

where $d_u$ and $d_v$ are the degrees of nodes $u$ and $v$, $\lambda_1$ is the largest eigenvalue of the adjacency matrix $\mathbf{A}$, $m$ is the total number of edges in the graph, and $\beta$ is a free parameter that discounts longer paths. In matrix form, it can be shown that:

$$
\mathbf{S}_{LHN}=2m\lambda_1\,\mathbf{D}^{-1}\left(\mathbf{I}-\frac{\beta}{\lambda_1} \mathbf{A}\right)^{-1}\mathbf{D}^{-1}
$$

where $\mathbf{D}$ is the diagonal degree matrix.
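The closed form is straightforward to compute directly. Below is a minimal NumPy sketch; the function name and the default value of $\beta$ are assumptions for illustration, not part of the original definition.

```python
import numpy as np

def lhn_similarity(A, beta=0.5):
    """Leicht-Holme-Newman similarity via the closed form
    S = 2 m lam1 D^{-1} (I - (beta/lam1) A)^{-1} D^{-1}.

    A    : symmetric adjacency matrix (no isolated nodes)
    beta : path-discount parameter in (0, 1), assumed default
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = A.sum(axis=1)                      # node degrees d_u
    m = A.sum() / 2.0                      # total number of edges
    lam1 = np.max(np.linalg.eigvalsh(A))   # largest eigenvalue lambda_1
    Dinv = np.diag(1.0 / d)
    core = np.linalg.inv(np.eye(n) - (beta / lam1) * A)
    return 2.0 * m * lam1 * Dinv @ core @ Dinv
```

For example, on a triangle graph every pair of nodes gets the same off-diagonal score, since all degrees and path counts are identical by symmetry.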

The idea of LHN is that, because Katz similarity assigns much higher scores to high-degree nodes, it corrects this bias by normalizing the actual number of observed paths by the expected number of paths, i.e. $\frac{\mathbf{A}^i[u,v]}{\mathbb{E}[\mathbf{A}^i[u,v]]}$. The expectation $\mathbb{E}[\mathbf{A}^i[u,v]]$ can be estimated from $d_u$, $d_v$, and $\lambda_1$: in a configuration-model random graph it is approximately $\frac{d_u d_v}{2m}\lambda_1^{i-1}$, which is exactly where the $\frac{2m}{d_u d_v}\lambda_1^{1-i}$ prefactor in the series comes from.
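As a numerical sanity check on the equivalence of the series and the closed form, the sketch below (assuming a small 4-node path graph and $\beta = 0.5$) truncates the series and compares it to the matrix expression. Only the off-diagonal entries are compared, since the identity term and the $i=0$ term of the geometric series differ on the diagonal.

```python
import numpy as np

# Assumed example: 4-node path graph 0-1-2-3 with beta = 0.5.
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    A[u, v] = A[v, u] = 1.0

beta = 0.5
d = A.sum(axis=1)                      # degrees d_u
m = A.sum() / 2.0                      # total number of edges
lam1 = np.max(np.linalg.eigvalsh(A))   # largest eigenvalue lambda_1

# Series form: I + (2m / (d_u d_v)) * sum_{i=1}^{K} beta^i lam1^(1-i) A^i
K = 60
Apow = np.eye(4)
acc = np.zeros((4, 4))
for i in range(1, K + 1):
    Apow = Apow @ A
    acc += beta**i * lam1**(1 - i) * Apow
series = np.eye(4) + 2.0 * m / np.outer(d, d) * acc

# Closed form: 2 m lam1 D^{-1} (I - (beta/lam1) A)^{-1} D^{-1}
Dinv = np.diag(1.0 / d)
closed = (2.0 * m * lam1
          * Dinv @ np.linalg.inv(np.eye(4) - (beta / lam1) * A) @ Dinv)

# Off-diagonal entries agree once the series has converged.
off = ~np.eye(4, dtype=bool)
print(np.allclose(series[off], closed[off]))  # True
```

The spectral radius of $\frac{\beta}{\lambda_1}\mathbf{A}$ is $\beta = 0.5$, so 60 terms are more than enough for the truncated series to converge to machine precision.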


Updated 2022-06-26


Tags

Deep Learning (in Machine learning)

Data Science