
Running large deep learning models on limited hardware or edge devices is often prohibitive. There are methods that compress large models by orders of magnitude while maintaining similar accuracy at inference time.
In this episode I explain one of the first such methods: knowledge distillation.
Come join us on Slack
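As a rough illustration of the idea (a sketch, not taken from the episode itself): in knowledge distillation a small student network is trained to match the temperature-softened output distribution of a large teacher network, usually blended with the ordinary hard-label loss. The weighting alpha, the temperature T, and the toy tensors below are illustrative assumptions.

    # Minimal sketch of a knowledge-distillation loss (in the spirit of Hinton et al., 2015).
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        # Soft targets: KL divergence between temperature-softened distributions.
        # The T*T factor keeps gradient magnitudes comparable across temperatures.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Hard targets: standard cross-entropy against the ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    # Toy usage: random logits for a batch of 8 examples over 10 classes.
    student_logits = torch.randn(8, 10, requires_grad=True)
    teacher_logits = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()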