Towards Principled Methods for Training Generative Adversarial Networks
Martin Arjovsky & Léon Bottou

Unsupervised learning
- We have samples $\{x^{(i)}\}_{i=1}^{m}$ from an unknown distribution $\mathbb{P}_r$.
- We want to approximate it by a parametric distribution $\mathbb{P}_\theta$ that is close to $\mathbb{P}_r$ in some sense.
- Close how?

Maximum Likelihood
- Maximum likelihood: $\max_\theta \frac{1}{m} \sum_{i=1}^{m} \log P_\theta(x^{(i)})$.
- Assumptions: $P_\theta$ is continuous with full support.
- Problems: a restricted-capacity model is forced to distribute its mass over the whole space (see the first sketch after the slides), and modeling distributions concentrated on low dimensional manifolds is impossible, since such distributions have no density in the ambient space.

Kullback-Leibler Divergence
- Closeness is measured by the KL divergence (equivalent to ML, as derived below):
  $KL(\mathbb{P}_r \,\|\, \mathbb{P}_\theta) = \int P_r(x) \log \frac{P_r(x)}{P_\theta(x)} \, dx$
- When the integrand goes to infinity ($P_r(x) > 0$ where $P_\theta(x) \to 0$): high cost for mode dropping.
- When the integrand goes to 0 ($P_r(x) \to 0$ where $P_\theta(x) > 0$): low cost for fake looking samples. The second sketch after the slides illustrates this asymmetry numerically.
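To spell out the "(equivalent to ML)" claim, here is the standard derivation, using the notation of the slides above: as $m \to \infty$, the average log-likelihood converges to an expectation under $\mathbb{P}_r$, and

\begin{aligned}
\max_\theta \frac{1}{m} \sum_{i=1}^{m} \log P_\theta(x^{(i)})
&\;\longrightarrow\; \max_\theta \, \mathbb{E}_{x \sim \mathbb{P}_r}\!\left[\log P_\theta(x)\right] \\
&= \max_\theta \, \bigl( -H(\mathbb{P}_r) - KL(\mathbb{P}_r \,\|\, \mathbb{P}_\theta) \bigr)
\;=\; \min_\theta \, KL(\mathbb{P}_r \,\|\, \mathbb{P}_\theta),
\end{aligned}

since the entropy $H(\mathbb{P}_r)$ is a constant independent of $\theta$.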
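A minimal numerical sketch of the restricted-capacity problem. The setup is our own toy example, not from the paper: the target is a two-mode mixture, while the model family is a single Gaussian, whose maximum-likelihood fit has a closed form (sample mean and standard deviation).

import numpy as np

# Toy setup (ours, for illustration): two well-separated data modes,
# but the model family is a single Gaussian.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-4.0, 0.5, 500),
                          rng.normal(4.0, 0.5, 500)])

mu_hat = samples.mean()     # MLE of the mean
sigma_hat = samples.std()   # MLE of the std (maximum-likelihood form, ddof=0)

# Restricted capacity distributes mass: the fitted Gaussian sits between
# the two modes with a large spread, placing high density exactly where
# the data has none.
print(f"mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")  # roughly mu = 0, sigma = 4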
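And a minimal sketch of the KL asymmetry from the last slide, on a discretized one-dimensional example of our own choosing: a model that drops a mode of $\mathbb{P}_r$ incurs a far larger $KL(\mathbb{P}_r \,\|\, \mathbb{P}_\theta)$ than one that smears mass over regions with no data.

import numpy as np

# Toy discretization (ours, purely illustrative) of KL(P_r || P_theta).
x = np.linspace(-8.0, 8.0, 2001)
dx = x[1] - x[0]

def gauss(mu, sigma):
    p = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return p / (p.sum() * dx)  # normalize to a density on the grid

def kl(p, q, eps=1e-12):
    # Discretized KL(p || q); eps avoids log(0) where a density vanishes.
    return float(np.sum(p * np.log((p + eps) / (q + eps))) * dx)

p_r = 0.5 * gauss(-4.0, 0.5) + 0.5 * gauss(4.0, 0.5)  # "real" data: two modes
p_drop = gauss(-4.0, 0.5)   # model that drops the right mode entirely
p_fake = gauss(0.0, 4.0)    # model that smears mass over the whole line

print(kl(p_r, p_drop))  # large: mode dropping is heavily penalized
print(kl(p_r, p_fake))  # small by comparison: fake-looking mass is cheap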