Latent Variable Models
- Is an autoencoder a generative model? Not by itself: a plain autoencoder learns to compress and reconstruct its inputs, but provides no principled way to sample new data from the latent space.
Variational Auto-encoder
- Variational inference (VI)
- The goal of VI is to find the variational distribution that best matches the posterior distribution
- Posterior distribution : $p_{\theta}(z|x)$
- By Bayes' rule, the posterior combines the prior distribution with the likelihood function (the "new evidence")
- Variational distribution : $q_{\phi}(z|x)$
- Since computing $p_{\theta}(z|x)$ exactly is nearly impossible, we approximate it with something we can learn! → $q_{\phi}(z|x)$
- In particular, we want to find the variational distribution that minimizes the KL divergence between the true posterior and the variational distribution

- How do we optimize the variational distribution if we don't know the posterior distribution? → by using the Evidence Lower Bound (ELBO); the decomposition below shows why this works
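A standard decomposition (not spelled out in the notes above) explains the trick: the log-evidence $\log p_{\theta}(x)$ is constant with respect to $\phi$, so maximizing the ELBO is equivalent to minimizing the KL divergence to the intractable posterior.

$$
\log p_{\theta}(x) = \underbrace{\mathbb{E}_{q_{\phi}(z|x)}\left[\log \frac{p_{\theta}(x,z)}{q_{\phi}(z|x)}\right]}_{\text{ELBO}} + D_{\mathrm{KL}}\left(q_{\phi}(z|x)\,\|\,p_{\theta}(z|x)\right)
$$

The ELBO itself splits into a reconstruction term and a prior-fitting term:

$$
\text{ELBO} = \underbrace{\mathbb{E}_{q_{\phi}(z|x)}\left[\log p_{\theta}(x|z)\right]}_{\text{reconstruction term}} - \underbrace{D_{\mathrm{KL}}\left(q_{\phi}(z|x)\,\|\,p(z)\right)}_{\text{prior-fitting term}}
$$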


- Key Limitations:
- Intractable (it is hard to evaluate the likelihood $p_{\theta}(x)$ of a given observation)
- The prior-fitting term must be differentiable in closed form, hence it is hard to use diverse latent prior distributions → this is why an isotropic Gaussian is mostly used (see the sketch below)
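A minimal PyTorch sketch of the negative ELBO under the usual assumptions (Bernoulli decoder, isotropic Gaussian prior $p(z) = \mathcal{N}(0, I)$); the names `reparameterize` and `vae_loss` are illustrative, not from the source. The one-line closed-form KL is the prior-fitting term, and it is exactly what forces the Gaussian choice.

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, log_var):
    # z = mu + sigma * eps keeps sampling differentiable w.r.t. phi
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def vae_loss(x, x_recon, mu, log_var):
    # Reconstruction term: E_q[log p_theta(x|z)] for a Bernoulli decoder
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # Prior-fitting term: KL(q_phi(z|x) || N(0, I)), tractable in closed
    # form only because both distributions are Gaussian
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl  # negative ELBO, to be minimized
```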

Adversarial Auto-encoder

- It allows us to use any arbitrary latent prior distribution that we can sample from: the prior-fitting KL term is replaced by an adversarial loss, so the prior no longer needs a differentiable closed form (sketched below)
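A sketch of this idea: a discriminator over latent codes tries to tell prior samples from encoder outputs, so the prior only needs to be sampleable. Here `latent_dim`, the network sizes, and the Uniform(-1, 1) prior are illustrative assumptions, not from the source.

```python
import torch
import torch.nn as nn

latent_dim = 2  # assumed for illustration

# Discriminator over latent codes: real = prior sample, fake = encoder output
discriminator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
bce = nn.BCELoss()

def discriminator_loss(z_fake):
    # The prior only needs sampling, not a differentiable density:
    # here Uniform(-1, 1), but any sampleable distribution works
    z_real = torch.rand_like(z_fake) * 2.0 - 1.0
    real_pred = discriminator(z_real)
    fake_pred = discriminator(z_fake.detach())
    return bce(real_pred, torch.ones_like(real_pred)) \
         + bce(fake_pred, torch.zeros_like(fake_pred))

def encoder_regularization_loss(z_fake):
    # The encoder plays the generator: its codes should fool the discriminator
    fake_pred = discriminator(z_fake)
    return bce(fake_pred, torch.ones_like(fake_pred))
```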
Generative Adversarial Network
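For reference, the standard minimax objective (Goodfellow et al., 2014): the discriminator $D$ learns to separate real data from generated samples, while the generator $G$ learns to fool it.

$$
\min_{G}\max_{D} \; \mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log\left(1 - D(G(z))\right)\right]
$$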

VAE vs GAN