
Running large deep learning models on limited hardware or edge devices is often prohibitively expensive. However, there are methods that compress large models by orders of magnitude while maintaining similar accuracy at inference time.
In this episode I explain one of the earliest of these methods: knowledge distillation.
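As a rough illustration of the idea (not code from the episode), here is a minimal sketch of a Hinton-style distillation loss in PyTorch: a small student network is trained to match the temperature-softened output distribution of a large teacher, in addition to the usual hard-label loss. The temperature, weighting factor, and the teacher/student models in the usage note are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Knowledge distillation loss (illustrative sketch).

    Combines a KL term between the teacher's and student's softened
    output distributions with ordinary cross-entropy on hard labels.
    """
    # Soften both output distributions with the temperature T.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence between the softened distributions; the T^2 factor
    # keeps gradient magnitudes comparable to the hard-label term.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Standard cross-entropy against the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term

# Hypothetical usage inside a training loop:
# teacher.eval()
# with torch.no_grad():
#     teacher_logits = teacher(x)
# student_logits = student(x)
# loss = distillation_loss(student_logits, teacher_logits, y)
# loss.backward()
```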
Come join us on Slack
By Francesco Gadaleta
