Papers Read on AI

Diffusion Models Beat GANs on Image Synthesis



We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models. We achieve this on unconditional image synthesis by finding a better architecture through a series of ablations. For conditional image synthesis, we further improve sample quality with classifier guidance: a simple, compute-efficient method for trading off diversity for sample quality using gradients from a classifier. We match BigGAN-deep even with as few as 25 forward passes per sample, all while maintaining better coverage of the distribution. Finally, we find that classifier guidance combines well with upsampling diffusion models. We release our code at https://github.com/openai/guided-diffusion.
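The classifier-guidance idea summarized above shifts each denoising step's predicted mean by the gradient of a classifier's log-probability for the target class, scaled by the step's variance and a guidance weight. A minimal NumPy sketch of that mean shift, assuming a hypothetical linear softmax classifier with weight matrix `W` (the paper uses a learned noisy-image classifier instead):

```python
import numpy as np

def log_prob_grad(x, y, W):
    """Gradient of log p(y|x) for a linear softmax classifier with weights W.

    W has shape (num_classes, dim); x has shape (dim,).
    For logits z = W @ x, the gradient of log softmax(z)[y] with respect
    to x is W[y] - softmax(z) @ W.
    """
    z = W @ x
    z = z - z.max()                      # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()      # softmax probabilities
    return W[y] - p @ W

def guided_mean(mean, variance, x, y, W, scale=1.0):
    """Classifier-guided mean for one reverse-diffusion step.

    Replaces the model's predicted mean mu with
    mu + scale * variance * grad_x log p(y|x),
    where variance is the (diagonal) per-dimension variance of the step.
    Larger scale trades diversity for sample quality, as in the paper.
    """
    return mean + scale * variance * log_prob_grad(x, y, W)
```

In the full method this shift is applied at every timestep of the reverse process, with the classifier trained on noised images so its gradients remain informative throughout sampling.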
2021: Prafulla Dhariwal, Alex Nichol
Ranked #1 on Image Generation on ImageNet 64x64 (FID metric)
https://arxiv.org/pdf/2105.05233v4.pdf

Papers Read on AI, by Rob

Rated 3.7 (3 ratings)