To assess the difference between real and synthetic data, Generative Adversarial Networks (GANs) are trained using a distribution discrepancy measure. Three widely employed classes of measures are information-theoretic divergences, integral probability metrics, and Hilbert space discrepancy metrics. We elucidate the theoretical connections between these three popular GAN training criteria and propose a novel procedure, called χ²-GAN, that is conceptually simple, stable during training, and resistant to mode collapse. Our procedure naturally generalizes to the problem of simultaneously matching multiple distributions. Further, we propose a resampling strategy that significantly improves sample quality by repurposing the trained critic function via an importance weighting mechanism. Experiments show that the proposed procedure improves stability and convergence, and yields state-of-the-art results on a wide range of generative modeling tasks.
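For context, the Pearson χ² divergence that the method's name points to has a standard closed form; the notation below is background, not the paper's exact training criterion, which may use a different parameterization:

```latex
% Pearson chi-squared divergence between densities p and q
% (standard textbook definition, stated here as background only):
\[
  \chi^2(p \,\|\, q)
    = \int \frac{\bigl(p(x) - q(x)\bigr)^2}{q(x)}\, dx
    = \mathbb{E}_{x \sim q}\!\left[\left(\frac{p(x)}{q(x)} - 1\right)^{2}\right].
\]
```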
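The critic-based resampling step can be pictured as self-normalized importance resampling. The sketch below is a minimal illustration under the assumption that the trained critic yields an estimate of the density ratio p_data(x)/p_g(x); the function and variable names (`importance_resample`, `critic_ratio`) are hypothetical and not taken from the paper.

```python
import numpy as np

def importance_resample(samples, ratio_estimates, num_draws, rng=None):
    """Self-normalized importance resampling of generator samples.

    `ratio_estimates[i]` is assumed to approximate the density ratio
    p_data(x_i) / p_g(x_i), recovered from a trained critic. Samples are
    redrawn with probability proportional to these weights, biasing the
    empirical distribution toward the data distribution.
    """
    rng = rng or np.random.default_rng()
    weights = np.asarray(ratio_estimates, dtype=np.float64)
    weights = np.clip(weights, 1e-12, None)  # guard against zero/negative weights
    probs = weights / weights.sum()          # self-normalize to a distribution
    idx = rng.choice(len(samples), size=num_draws, replace=True, p=probs)
    return samples[idx]

# Hypothetical usage: x_gen are generator samples; critic_ratio stands in
# for any critic-derived density-ratio estimate.
x_gen = np.random.default_rng(0).normal(size=(1000, 2))
critic_ratio = np.exp(-0.5 * (x_gen ** 2).sum(axis=1))  # stand-in ratio
x_resampled = importance_resample(x_gen, critic_ratio, num_draws=500)
```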