Propositions/Theorems of GANs

  • Proposition 1: For a fixed generator $G$, the optimal discriminator $D$ is $D_G^*(\mathbf{x}) = \frac{p_{data}(\mathbf{x})}{p_{data}(\mathbf{x}) + p_g(\mathbf{x})}$.
  • Theorem 1: The global minimum of the virtual training criterion $C(G)$ is achieved if and only if $p_g = p_{data}$, at which point $C(G) = -\log 4$.
  • Proposition 2: $p_g$ converges to $p_{data}$ if:
  1. $G$ and $D$ have enough capacity,
  2. at each step of Algorithm 1 the discriminator is allowed to reach its optimum given $G$,
  3. $p_g$ is updated so as to improve the criterion $\mathbb{E}_{\mathbf{x} \sim p_{data}}[\log D_G^*(\mathbf{x})] + \mathbb{E}_{\mathbf{x} \sim p_g}[\log(1 - D_G^*(\mathbf{x}))]$.
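Proposition 1 and Theorem 1 can be checked numerically for discrete distributions. A minimal sketch (the helper names are my own, not from the paper): when $p_g = p_{data}$, the optimal discriminator is $1/2$ everywhere and the criterion evaluates to $-\log 4$; any mismatch gives a strictly larger value.

```python
import math

def optimal_discriminator(p_data, p_g):
    # Proposition 1: D_G*(x) = p_data(x) / (p_data(x) + p_g(x))
    return [pd / (pd + pg) for pd, pg in zip(p_data, p_g)]

def virtual_criterion(p_data, p_g):
    # C(G) = E_{x ~ p_data}[log D*(x)] + E_{x ~ p_g}[log(1 - D*(x))]
    # Expectations are exact sums over the discrete support.
    d_star = optimal_discriminator(p_data, p_g)
    term_data = sum(pd * math.log(d) for pd, d in zip(p_data, d_star))
    term_gen = sum(pg * math.log(1 - d) for pg, d in zip(p_g, d_star))
    return term_data + term_gen

p_data = [0.2, 0.5, 0.3]
print(virtual_criterion(p_data, p_data))           # -log 4 ≈ -1.3863
print(virtual_criterion(p_data, [0.6, 0.2, 0.2]))  # larger than -log 4
```

The gap between $C(G)$ and $-\log 4$ equals twice the Jensen–Shannon divergence between $p_{data}$ and $p_g$, which is why the minimum is attained exactly at $p_g = p_{data}$.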


Updated 2021-08-12

Tags

Data Science