Authors
Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, Yi Zhang
Publication date
2017/7/17
Conference
International Conference on Machine Learning
Pages
224-232
Publisher
PMLR
Description
It is shown that training of a generative adversarial network (GAN) may not have good generalization properties; e.g., training may appear successful, yet the trained distribution may be far from the target distribution in standard metrics. However, generalization does occur for a weaker metric called the neural net distance. It is also shown that an approximate pure equilibrium exists in the discriminator/generator game for a natural training objective (Wasserstein) when generator capacity and training set size are moderate. This existence of equilibrium inspires the MIX+GAN protocol, which can be combined with any existing GAN training method and is empirically shown to improve some of them.
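For context, a minimal sketch (not quoted verbatim from the paper) of the neural net distance mentioned above, stated in the Wasserstein-style form in which the supremum runs over a class \mathcal{F} of neural-network discriminators; the symbols \mu (target distribution), \nu (trained distribution), and \mathcal{F} are notational assumptions here:

\[
d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{D \in \mathcal{F}} \Big| \, \mathbb{E}_{x \sim \mu}[D(x)] \;-\; \mathbb{E}_{x \sim \nu}[D(x)] \, \Big|
\]

Because \mathcal{F} has bounded capacity, this distance evaluated on moderately sized samples concentrates around its population value, which is the sense in which generalization holds for the neural net distance even when it can fail for stronger metrics such as Wasserstein or Jensen-Shannon over all functions.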
Total citations
2017: 39, 2018: 105, 2019: 97, 2020: 137, 2021: 135, 2022: 125, 2023: 99, 2024: 47