Linear Digressions

Neural Net Dropout



Neural networks are complex models with many parameters and can be prone to overfitting. There's a surprisingly simple way to guard against this: randomly silence hidden units during training, severing their connections to the rest of the network, a technique known as dropout. It seems counterintuitive that undermining the structural integrity of the neural net makes it robust against overfitting, but in the world of neural nets, weirdness is just how things go sometimes.
Relevant links: https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
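A quick sketch may make the mechanic concrete. Below is a minimal NumPy implementation of "inverted" dropout, the variant most modern libraries use (the original Hinton et al. paper instead rescales the weights at test time, which is mathematically equivalent in expectation). The function name, drop probability, and array shapes are illustrative choices, not anything specified in the episode.

import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop=0.5, training=True):
    # Inverted dropout: during training, each hidden unit is zeroed
    # with probability p_drop. Surviving units are scaled by
    # 1 / (1 - p_drop) so the expected activation is unchanged,
    # which means no rescaling is needed at test time.
    if not training or p_drop == 0.0:
        return activations  # test time: use the full network as-is
    keep_prob = 1.0 - p_drop
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

# Toy usage: activations of one hidden layer for a batch of 4 examples.
hidden = rng.standard_normal((4, 8))
print(dropout(hidden, p_drop=0.5, training=True))  # roughly half the units zeroed
print(dropout(hidden, training=False))             # unchanged at test time

Because a different random mask is drawn on every training pass, the network is effectively trained as an ensemble of many thinned sub-networks, which is one intuition for why the damage helps rather than hurts.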

Linear Digressions, by Ben Jaffe and Katie Malone

4.8 (353 ratings)


More shows like Linear Digressions

Stuff You Should Know, by iHeartPodcasts (78,613 listeners)

Practical AI, by Practical AI LLC (200 listeners)