A team of researchers from Nvidia has introduced a new training methodology for generative adversarial networks (GANs).
The key idea here is to grow both the generator and discriminator progressively: “starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024²,” states the team.
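To make the growing idea concrete, here is a minimal PyTorch sketch of a generator that doubles its output resolution by adding blocks, fading each new block in with a blend weight `alpha`. The layer widths, class name, and fade-in schedule are simplified illustrations, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveGenerator(nn.Module):
    """Toy generator that grows from 4x4 by doubling resolution.

    Each new block is faded in via a blend weight `alpha` (0 -> 1),
    mirroring the paper's smooth-transition scheme. Channel counts
    are kept constant here purely for brevity.
    """
    def __init__(self, latent_dim=128, base_ch=64):
        super().__init__()
        self.base = nn.Sequential(                 # latent -> 4x4 feature map
            nn.ConvTranspose2d(latent_dim, base_ch, 4), nn.LeakyReLU(0.2),
        )
        self.blocks = nn.ModuleList()              # one block per doubling
        self.to_rgb = nn.ModuleList([nn.Conv2d(base_ch, 3, 1)])
        self.ch = base_ch

    def grow(self):
        """Add an upsample+conv block for the next resolution."""
        self.blocks.append(nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(self.ch, self.ch, 3, padding=1), nn.LeakyReLU(0.2),
        ))
        self.to_rgb.append(nn.Conv2d(self.ch, 3, 1))

    def forward(self, z, alpha=1.0):
        x = self.base(z.view(z.size(0), -1, 1, 1))
        for block in self.blocks[:-1]:             # fully faded-in blocks
            x = block(x)
        if not self.blocks:                        # still at 4x4
            return self.to_rgb[0](x)
        new = self.to_rgb[-1](self.blocks[-1](x))  # newest block's output
        old = F.interpolate(self.to_rgb[-2](x), scale_factor=2)  # skip path
        return alpha * new + (1 - alpha) * old     # fade the new block in

g = ProgressiveGenerator()
z = torch.randn(2, 128)
print(g(z).shape)             # 4x4 output before growing
g.grow()
print(g(z, alpha=0.3).shape)  # 8x8 output, new block partially faded in
```

During training, `alpha` would ramp from 0 to 1 over many iterations after each `grow()` call, so the network never receives a sudden shock from an untrained layer.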
The team also proposes a simple way to increase the variation in generated images, achieving a record inception score of 8.80 on unsupervised CIFAR-10. Additionally, they describe several implementation details that are important for discouraging unhealthy competition between the generator and the discriminator.
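The variation trick in the paper is a minibatch standard-deviation layer appended near the end of the discriminator, which lets it penalize generators that produce overly similar samples. A minimal single-group sketch in PyTorch (the paper's version supports grouping; shapes here are illustrative):

```python
import torch
import torch.nn as nn

class MinibatchStdDev(nn.Module):
    """Append the batch-wide feature std as an extra channel, so the
    discriminator can detect batches with suspiciously low variety.
    Simplified single-group version of the layer from the paper."""
    def forward(self, x):                             # x: (N, C, H, W)
        std = x.std(dim=0, unbiased=False).mean()     # one scalar statistic
        stat = std.view(1, 1, 1, 1).expand(x.size(0), 1, x.size(2), x.size(3))
        return torch.cat([x, stat], dim=1)            # (N, C + 1, H, W)

layer = MinibatchStdDev()
feats = torch.randn(8, 64, 4, 4)
print(layer(feats).shape)  # torch.Size([8, 65, 4, 4])
```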
Finally, they suggest a new metric for evaluating GAN results in terms of both image quality and variation.
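The metric in question is a sliced Wasserstein distance (SWD) computed over patch descriptors from a Laplacian pyramid. Below is a minimal sketch of only the core SWD computation between two equal-sized point sets; the pyramid and patch extraction are omitted, and the shapes are illustrative assumptions:

```python
import torch

def sliced_wasserstein(a, b, n_proj=512):
    """Approximate sliced Wasserstein distance between point sets
    a, b of shape (n, d): project onto random unit directions, sort
    each 1-D projection, and average the differences."""
    d = a.size(1)
    dirs = torch.randn(d, n_proj)
    dirs /= dirs.norm(dim=0, keepdim=True)  # unit-length projections
    pa = (a @ dirs).sort(dim=0).values      # sorted 1-D projected samples
    pb = (b @ dirs).sort(dim=0).values
    return (pa - pb).abs().mean()

real_patches = torch.randn(1024, 49)  # e.g., flattened 7x7 patches
fake_patches = torch.randn(1024, 49)
print(float(sliced_wasserstein(real_patches, fake_patches)))
```

Intuitively, a small SWD at coarse pyramid levels indicates similar large-scale structure, while small values at fine levels indicate matching local detail, which is what lets the metric capture both quality and variation.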
You can learn more in the full paper, “Progressive Growing of GANs for Improved Quality, Stability, and Variation,” available on arXiv.