
As a continuation of the previous episode, in this one I cover the topic of compressing deep learning models and explain another simple yet fantastic approach that can lead to much smaller models that still perform as well as the original one.
Don't forget to join our Slack channel and discuss previous episodes or propose new ones.
This episode is supported by Pryml.io
Comparing Rewinding and Fine-tuning in Neural Network Pruning
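The paper above compares retraining strategies after magnitude pruning: fine-tuning the pruned network at a low learning rate versus rewinding the surviving weights to an earlier point in training and retraining from there. The sketch below is a minimal illustration of that idea, not code from the episode or the paper; the toy model, dataset, and hyperparameters are assumptions made up for the example.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(512, 20), torch.randint(0, 2, (512,))  # toy classification data

def make_model():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

def apply_masks(model, masks):
    # Zero out pruned weights so they stay pruned during retraining.
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            p.mul_(m)

def train(model, epochs, lr, masks=None, snapshot_epoch=None):
    # Full-batch SGD on the toy data; optionally returns an early-epoch snapshot
    # of the weights, which is what weight rewinding restores later.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    snapshot = None
    if masks is not None:
        apply_masks(model, masks)
    for epoch in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
        if masks is not None:
            apply_masks(model, masks)
        if epoch == snapshot_epoch:
            snapshot = copy.deepcopy(model.state_dict())
    return snapshot

def magnitude_masks(model, sparsity):
    # Global magnitude pruning: drop the smallest |w| across all parameters
    # (biases included, for simplicity of the sketch).
    all_w = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(all_w, sparsity)
    return [(p.detach().abs() > threshold).float() for p in model.parameters()]

# 1. Train the dense network, keeping an early snapshot for rewinding.
model = make_model()
early_weights = train(model, epochs=20, lr=0.1, snapshot_epoch=2)

# 2. Prune 80% of the weights by magnitude.
masks = magnitude_masks(model, sparsity=0.8)

# 3a. Fine-tuning: keep the final weights and retrain at a small learning rate.
finetuned = copy.deepcopy(model)
train(finetuned, epochs=20, lr=0.01, masks=masks)

# 3b. Weight rewinding: reset the surviving weights to their early-epoch values
#     and retrain with the original, larger learning rate.
rewound = make_model()
rewound.load_state_dict(early_weights)
train(rewound, epochs=20, lr=0.1, masks=masks)
```

The paper also looks at learning-rate rewinding, where the final pruned weights are kept but retraining repeats the original learning-rate schedule; in this constant-learning-rate sketch that would simply mean retraining the pruned final weights with lr=0.1 instead of 0.01.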