
As a continuation of the previous episode, in this one I cover the topic of compressing deep learning models and explain another simple yet effective approach that can lead to much smaller models that still perform as well as the original.
Don't forget to join our Slack channel to discuss previous episodes or propose new ones.
This episode is supported by Pryml.io
Comparing Rewinding and Fine-tuning in Neural Network Pruning
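To make the comparison in the paper's title concrete, here is a minimal PyTorch sketch (not code from the episode or the paper) contrasting the two retraining strategies after magnitude pruning: fine-tuning the surviving weights versus rewinding them to their early-training values. The toy model, synthetic data, epochs, sparsity level, and learning rates are all illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)

# Toy model and synthetic data, purely for illustration.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(512, 20), torch.randint(0, 2, (512,))
loss_fn = nn.CrossEntropyLoss()

def train(net, epochs, lr):
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

# Train briefly and save the early-training weights we may "rewind" to later.
train(model, epochs=5, lr=0.1)
rewind_state = copy.deepcopy(model.state_dict())

# Continue training to (toy) convergence.
train(model, epochs=50, lr=0.1)

# Magnitude-prune 80% of the weights in every Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)

# Strategy 1: fine-tuning -- keep the converged values of the surviving
# weights and train a little more at a low learning rate.
finetuned = copy.deepcopy(model)
train(finetuned, epochs=10, lr=0.01)

# Strategy 2: weight rewinding -- reset the surviving weights to their
# early-training values (the pruning masks stay in place), then retrain
# for the remaining schedule at the original learning rate.
rewound = copy.deepcopy(model)
for name, module in rewound.named_modules():
    if isinstance(module, nn.Linear):
        # "weight_orig" is the unmasked parameter added by torch's pruning API.
        module.weight_orig.data.copy_(rewind_state[f"{name}.weight"])
train(rewound, epochs=50, lr=0.1)

print("fine-tuned loss:", loss_fn(finetuned(x), y).item())
print("rewound loss:   ", loss_fn(rewound(x), y).item())
```

On this toy setup the numbers mean little; the point is only the mechanical difference between the two strategies after the same pruning step.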