The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Neural Network Quantization and Compression with Tijmen Blankevoort - TWIML Talk #292

08.19.2019 - By Sam Charrington


Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm, who leads their compression and quantization research teams. In our conversation with Tijmen we discuss: 

• The ins and outs of compression and quantization of ML models, specifically neural networks,

• How much models can actually be compressed, and the best ways to achieve compression,

• A few recent papers, including "The Lottery Ticket Hypothesis."
