Data Science at Home

How to improve the stability of training a GAN (Ep. 88)



Generative Adversarial Networks, or GANs, are very powerful tools for generating data. However, training a GAN is not easy. More specifically, GANs suffer from three major issues: instability of the training procedure, mode collapse, and vanishing gradients.

 

In this episode I explain not only the most challenging issues one encounters while designing and training Generative Adversarial Networks, but also some methods and architectures to mitigate them. In addition, I describe the three specific strategies that researchers are considering to improve the accuracy and reliability of GANs.

 

The most troublesome issues of GANs

 

Convergence to equilibrium

 

A typical GAN consists of at least two networks: a generator G and a discriminator D. The generator's task is to produce samples from random noise. In turn, the discriminator has to learn to distinguish fake samples from real ones. While it is theoretically possible for the generator and discriminator to converge to a Nash equilibrium (at which both networks are in their optimal state), reaching such an equilibrium in practice is not easy.
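To make the two-player setup concrete, here is a minimal NumPy sketch of the value function the two networks fight over: the discriminator tries to push it up, the generator to push it down. The toy linear generator, logistic discriminator, data distribution, and parameter values are all hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical toy networks: a linear generator and a logistic discriminator.
def G(z, theta_g):               # generator: maps noise to a sample
    return theta_g[0] * z + theta_g[1]

def D(x, theta_d):               # discriminator: probability that x is real
    return sigmoid(theta_d[0] * x + theta_d[1])

theta_g = np.array([1.0, 0.0])   # assumed initial parameters
theta_d = np.array([1.0, 0.0])

real = rng.normal(4.0, 1.0, size=256)   # "real" data: samples from N(4, 1)
z = rng.normal(0.0, 1.0, size=256)      # noise fed to the generator
fake = G(z, theta_g)

# GAN value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].
# D performs gradient ascent on V; G performs gradient descent on it.
V = np.mean(np.log(D(real, theta_d))) + np.mean(np.log(1.0 - D(fake, theta_d)))
print(f"V(D, G) = {V:.3f}")
```

A Nash equilibrium would be a pair (G, D) where neither player can improve V by changing only its own parameters; in practice, alternating gradient updates on this objective can oscillate or diverge instead of settling there.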

 

Vanishing gradients

 

Moreover, a very accurate discriminator pushes its loss function towards lower and lower values. This, in turn, can cause the generator's gradients to vanish and the entire network to stop learning completely.

 

Mode collapse

 

Another phenomenon that is easy to observe when dealing with GANs is mode collapse: the inability of the model to generate diverse samples. The generated data become more and more similar to one another, so the entire generated dataset ends up concentrated around a single statistical mode.
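One mitigation along these lines is to let the discriminator see batch-level diversity directly, for instance by appending the minibatch standard deviation as an extra feature (an idea popularized by progressive GANs). The sketch below shows the statistic in isolation; the batch shapes and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def minibatch_stddev_feature(batch):
    """Append the batch-wide standard deviation as an extra feature,
    so a discriminator can spot low-diversity (collapsed) batches."""
    std = np.std(batch, axis=0).mean()        # one scalar for the batch
    extra = np.full((batch.shape[0], 1), std)
    return np.concatenate([batch, extra], axis=1)

diverse = rng.normal(0.0, 1.0, size=(64, 8))            # varied samples
collapsed = np.tile(rng.normal(size=(1, 8)), (64, 1))   # identical samples

print(minibatch_stddev_feature(diverse)[0, -1])    # noticeably above 0
print(minibatch_stddev_feature(collapsed)[0, -1])  # essentially 0
```

Because a collapsed generator produces near-identical batches, the appended statistic drops towards zero, and the discriminator can use that to reject the whole batch as fake.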

 

The solution

 

Researchers have considered several approaches to overcome these issues, experimenting with architectural changes, different loss functions, and ideas from game theory.
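As one example of the loss-function direction, the Wasserstein GAN replaces the probabilistic discriminator with an unbounded "critic": its loss is simply the difference between the mean critic scores on fake and real data, with no logarithm and therefore no saturation. The toy linear critic and all values below are made up for illustration; a real WGAN also constrains the critic (e.g. via weight clipping or a gradient penalty) to enforce a Lipschitz condition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely illustrative linear critic: outputs a score, not a probability.
w, b = 1.0, 0.0

def critic(x):
    return w * x + b

real = rng.normal(4.0, 1.0, size=256)   # "real" data: N(4, 1)
fake = rng.normal(0.0, 1.0, size=256)   # generator output, still off-target

# Wasserstein critic loss: mean(score_fake) - mean(score_real).
critic_loss = np.mean(critic(fake)) - np.mean(critic(real))
# Generator loss: make the critic score fakes higher.
generator_loss = -np.mean(critic(fake))

print(f"critic loss:    {critic_loss:.3f}")
print(f"generator loss: {generator_loss:.3f}")
```

Because these losses stay informative even when real and fake distributions barely overlap, they address both the vanishing-gradient and the stability problems discussed above.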

 

Listen to the full episode to learn more about the most effective strategies for building GANs that are reliable and robust.

Don't forget to join the conversation on our new Discord channel. See you there!

 

Data Science at Home, by Francesco Gadaleta

Rating: 4.2 (72 ratings)

