

Today we kick off our coverage of the 2023 ICLR conference, joined by Christos Louizos, an ML researcher at Qualcomm Technologies. In our conversation with Christos, we explore his paper Hyperparameter Optimization through Neural Network Partitioning and a few of his colleagues' works from the conference. We discuss methods for speeding up attention mechanisms in transformers, scheduling operations for computation graphs, estimating channels in indoor environments, and adapting to distribution shifts at test time with neural network modules. We also talk through the benefits and limitations of federated learning, exploring sparse models, optimizing communication between servers and devices, and much more.
The complete show notes for this episode can be found at https://twimlai.com/go/627.
By Sam Charrington
4.7 · 422 ratings
