Running large deep learning models on limited hardware or edge devices is often prohibitive. There are, however, methods that compress large models by orders of magnitude while maintaining comparable accuracy at inference time.
In this episode I explain one of the first such methods: knowledge distillation.
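To give a rough idea of the technique discussed in the episode, here is a minimal sketch of a distillation loss in PyTorch. The temperature T, the weighting factor alpha, and the names student_logits/teacher_logits are illustrative assumptions, not code from the episode; the structure follows the standard soft-target formulation of knowledge distillation.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soften the teacher's predictions with temperature T (assumed hyperparameter)
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    soft_student = F.log_softmax(student_logits / T, dim=1)
    # KL divergence between softened distributions, scaled by T^2
    soft_loss = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    # Standard cross-entropy against the true labels
    hard_loss = F.cross_entropy(student_logits, labels)
    # Blend the two objectives; alpha is an assumed weighting
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

During training, the large teacher model runs in evaluation mode to produce teacher_logits, while only the small student model is updated with this loss.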
Come join us on Slack
By Francesco Gadaleta