


As a continuation of the previous episode, in this one I cover the topic of compressing deep learning models and explain another simple yet effective approach that can lead to much smaller models that still perform as well as the original.
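The episode compares rewinding and fine-tuning as retraining strategies after pruning. As a rough illustration (not the episode's exact method), here is a minimal sketch of global magnitude pruning in NumPy, with comments contrasting the two retraining strategies; the function name and the 50% sparsity level are illustrative choices:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (global magnitude pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights, np.ones_like(weights, dtype=bool)
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# After pruning, the surviving weights must be retrained. Conceptually:
# - fine-tuning: keep the surviving weights at their final trained values and
#   continue training for a few epochs at a low, fixed learning rate;
# - weight rewinding: reset the surviving weights to their values from an
#   earlier point in training, then retrain with the original schedule.

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = magnitude_prune(w, 0.5)
print(int(mask.sum()))  # 8 of 16 weights survive at 50% sparsity
```

In practice the mask would be reapplied after every retraining step so pruned weights stay at zero.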
Don't forget to join our Slack channel and discuss previous episodes or propose new ones.
This episode is supported by Pryml.io
Comparing Rewinding and Fine-tuning in Neural Network Pruning
By Francesco Gadaleta · 4.2 (72 ratings)
