

This episode discusses the vanishing gradient: a problem that arises when training deep neural networks in which nearly all the gradients are very close to zero by the time back-propagation has reached the first hidden layer. This makes learning in the earlier layers virtually impossible without some clever trick or improved methodology to help them begin to learn.
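The effect described above can be sketched numerically. The following is a minimal, hypothetical illustration (not from the episode): the sigmoid derivative is at most 0.25, so back-propagating through many sigmoid layers multiplies many small factors together, and the gradient reaching the first hidden layer becomes vanishingly small. All layer sizes and weight scales here are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_layers = 30  # a deep stack of sigmoid layers
width = 16     # arbitrary hidden width
x = rng.standard_normal(width)

# Forward pass, storing weights and pre-activations so we can
# form the local derivatives needed for back-propagation.
pre_acts = []
h = x
for _ in range(n_layers):
    W = rng.standard_normal((width, width)) * 0.1
    z = W @ h
    pre_acts.append((W, z))
    h = sigmoid(z)

# Backward pass: track the gradient norm at each layer.
# Each step multiplies by W^T and by sigmoid'(z) <= 0.25,
# so the norm shrinks roughly geometrically.
grad = np.ones(width)
norms = []
for W, z in reversed(pre_acts):
    s = sigmoid(z)
    grad = W.T @ (grad * s * (1.0 - s))
    norms.append(np.linalg.norm(grad))

# norms[0] is the gradient norm near the output;
# norms[-1] is what finally reaches the first hidden layer.
print(f"near output: {norms[0]:.3e}, first hidden layer: {norms[-1]:.3e}")
```

Running this shows the gradient at the first hidden layer is many orders of magnitude smaller than near the output, which is why, without remedies such as better initializations or non-saturating activations, the earliest layers barely learn.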
By Kyle Polich · 4.4 (475 ratings)