
Running large deep learning models on limited hardware or edge devices can be prohibitively expensive. There are methods that compress large models by orders of magnitude while maintaining similar accuracy at inference time.
In this episode I explain one of the first such methods: knowledge distillation.
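For listeners who want a concrete reference point, below is a minimal sketch of the standard distillation objective (soft targets from a teacher at temperature T, combined with the usual hard-label loss), assuming PyTorch. The temperature T and the weight alpha are illustrative hyperparameters, not values from the episode.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft-target term: KL divergence between the temperature-softened
    # teacher and student distributions, scaled by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label term: ordinary cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

In a training loop, the teacher runs in eval mode under torch.no_grad() to produce teacher_logits, while only the smaller student network is updated with this loss.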
Come join us on Slack