Data Science at Home

How to improve the stability of training a GAN (Ep. 88)



Generative Adversarial Networks, or GANs, are very powerful tools for generating data. However, training a GAN is not easy. More specifically, GANs suffer from three major issues: instability of the training procedure, mode collapse, and vanishing gradients.

 

In this episode I explain not only the most challenging issues one encounters while designing and training Generative Adversarial Networks, but also some methods and architectures to mitigate them. In addition, I walk through three specific strategies that researchers are pursuing to improve the accuracy and reliability of GANs.

 

The most troublesome issues of GANs

 

Convergence to equilibrium

 

A typical GAN is formed by at least two networks: a generator G and a discriminator D. The generator's task is to produce samples from random noise. The discriminator, in turn, has to learn to distinguish fake samples from real ones. While it is theoretically possible for generator and discriminator to converge to a Nash equilibrium (at which both networks are in their optimal state), reaching such an equilibrium in practice is not easy.
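To make the two-player setup concrete, here is a minimal NumPy sketch of the GAN value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))], which the discriminator maximizes and the generator minimizes. The "networks" below are toy linear maps chosen purely for illustration; they are not from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "networks": a linear generator and a linear discriminator.
W_g = rng.normal(size=(2, 2))   # generator weights: noise -> sample
w_d = rng.normal(size=2)        # discriminator weights: sample -> score

def generator(z):
    return z @ W_g              # G(z): map random noise to a fake sample

def discriminator(x):
    return sigmoid(x @ w_d)     # D(x): estimated probability that x is real

real = rng.normal(loc=3.0, size=(64, 2))  # stand-in for real data
z = rng.normal(size=(64, 2))              # random noise
fake = generator(z)

# GAN value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].
# D is trained to maximize it, G to minimize it.
eps = 1e-8  # numerical safety for log
value = (np.mean(np.log(discriminator(real) + eps))
         + np.mean(np.log(1.0 - discriminator(fake) + eps)))
print(value)
```

Both expectations are logs of probabilities, so the value is never positive; training alternates gradient steps on D and G against this single objective, which is exactly why convergence to an equilibrium is delicate.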

 

Vanishing gradients

 

Moreover, a very accurate discriminator pushes the loss function towards lower and lower values. This, in turn, can cause the gradients flowing back to the generator to vanish, so the generator stops learning entirely.
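This effect can be seen directly in the math of the original (saturating) generator loss log(1 - D(G(z))) versus the common non-saturating alternative -log D(G(z)). The sketch below (illustrative, not from the episode) computes the exact gradients with respect to the discriminator's logit when the discriminator is very confident a sample is fake:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Suppose the discriminator is very confident a generated sample is fake:
# its logit a is strongly negative, so D(G(z)) = sigmoid(a) is close to 0.
a = -10.0

# Saturating generator loss: L = log(1 - sigmoid(a)); dL/da = -sigmoid(a).
grad_saturating = -sigmoid(a)

# Non-saturating alternative: L = -log(sigmoid(a)); dL/da = sigmoid(a) - 1.
grad_non_saturating = sigmoid(a) - 1.0

print(grad_saturating)      # tiny: the generator barely learns
print(grad_non_saturating)  # close to -1: a strong learning signal remains
```

When sigmoid(a) is near zero, the saturating gradient -sigmoid(a) is near zero too, which is precisely the vanishing-gradient failure mode; the non-saturating loss keeps its gradient near -1 in the same regime.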

 

Mode collapse

 

Another phenomenon that is easy to observe when dealing with GANs is mode collapse: the inability of the model to generate diverse samples. This, in turn, leads to generated data that look more and more alike, until the entire generated dataset is concentrated around a single mode of the data distribution.
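One crude way to spot mode collapse is to track the diversity of generated batches, for example via mean pairwise distance. The check below is an illustrative sketch (the "collapsed" and "diverse" samples are synthetic stand-ins, not outputs of a real GAN):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_pairwise_distance(samples):
    """Average Euclidean distance between all pairs of samples;
    a value near zero suggests the generator has collapsed."""
    diffs = samples[:, None, :] - samples[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).mean()

# Healthy generator: samples spread across the data distribution.
diverse = rng.normal(size=(100, 2))

# Collapsed generator: every sample lands near a single mode.
collapsed = np.array([1.0, 1.0]) + 0.01 * rng.normal(size=(100, 2))

print(mean_pairwise_distance(diverse))    # large
print(mean_pairwise_distance(collapsed))  # close to zero
```

A sharp drop in such a diversity score during training is a practical warning sign that the generator is concentrating on one mode.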

 

The solution

 

Researchers have considered several approaches to overcome these issues, experimenting with architectural changes, different loss functions, and ideas from game theory.
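One well-known loss-function change in this line of work is the Wasserstein GAN critic objective, paired (in its original form) with weight clipping to keep the critic Lipschitz. The sketch below is a toy illustration with placeholder scores, not a full training loop:

```python
import numpy as np

rng = np.random.default_rng(0)

def critic_loss(scores_real, scores_fake):
    """WGAN critic objective (to be maximized): push scores on real
    data up and scores on fake data down, approximating an
    earth-mover distance between the two distributions."""
    return scores_real.mean() - scores_fake.mean()

def clip_weights(weights, c=0.01):
    """The original WGAN's crude way to enforce a Lipschitz constraint."""
    return np.clip(weights, -c, c)

w = clip_weights(rng.normal(size=4))

# Placeholder critic scores: the critic already separates real from fake.
real_scores = rng.normal(loc=1.0, size=32)
fake_scores = rng.normal(loc=-1.0, size=32)
print(critic_loss(real_scores, fake_scores))
```

Because the critic's scores are unbounded (no sigmoid), this objective avoids the saturation that causes vanishing gradients, which is one reason it tends to stabilize training.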

 

Listen to the full episode to learn more about the most effective strategies for building GANs that are reliable and robust.

Don't forget to join the conversation on our new Discord channel. See you there!

 


Data Science at Home, by Francesco Gadaleta
4.2 (72 ratings)

