
Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e. deep network theory. We talk about what he’s learned by studying linear deep networks and asking how depth and initial weights affect learning dynamics, when replay is appropriate (and when it’s not), how semantics develop, and what it all might tell us about deep learning in brains.
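To make the "depth and initial weights affect learning dynamics" point concrete, here is a minimal toy sketch (not from the episode) in the spirit of deep linear network theory: a two-layer linear network `y = W2 @ W1 @ x` trained by gradient descent on a random linear teacher. The dimensions, learning rate, and initialization scales below are arbitrary illustrative choices. Even though the end-to-end map is linear, composing two weight matrices makes the loss non-convex in `(W1, W2)`, and how fast learning escapes the saddle near zero depends on the initial weight scale.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
teacher = rng.standard_normal((d, d))   # ground-truth linear map
X = rng.standard_normal((d, 200))      # inputs (columns are samples)
Y = teacher @ X                         # targets

def train(init_scale, steps=2000, lr=0.01):
    """Gradient descent on 0.5 * ||W2 W1 X - Y||_F^2 / n for a 2-layer linear net."""
    W1 = init_scale * rng.standard_normal((d, d))
    W2 = init_scale * rng.standard_normal((d, d))
    n = X.shape[1]
    for _ in range(steps):
        E = (W2 @ W1 @ X - Y) / n       # scaled residual
        gW2 = E @ (W1 @ X).T            # gradient w.r.t. the second layer
        gW1 = W2.T @ E @ X.T            # gradient w.r.t. the first layer
        W2 -= lr * gW2
        W1 -= lr * gW1
    return 0.5 * float(np.sum((W2 @ W1 @ X - Y) ** 2)) / n

# Smaller initial weights start closer to the saddle at W1 = W2 = 0,
# which is the kind of initialization effect this theory makes precise.
loss_small = train(init_scale=0.01)
loss_large = train(init_scale=0.3)
print(loss_small, loss_large)
```

Both runs drive the loss well below its value at initialization; plotting the loss per step (rather than only the final value) shows the stage-like drops in error that the deep linear network analysis predicts.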
Show notes:
A few recommended texts to dive deeper:
By Paul Middlebrooks · 4.8 (134 ratings)
