
Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e. deep network theory. We talk about what he’s learned by studying linear deep networks and asking how depth and initial weights affect learning dynamics, when replay is appropriate (and when it’s not), how semantics develop, and what it all might tell us about deep learning in brains.
Show notes:
A few recommended texts to dive deeper: