Today we kick off our coverage of the 2023 ICLR conference, joined by Christos Louizos, an ML researcher at Qualcomm Technologies. In our conversation with Christos, we explore his paper Hyperparameter Optimization through Neural Network Partitioning and a few of his colleagues' works from the conference. We discuss methods for speeding up attention mechanisms in transformers, scheduling operations for computation graphs, estimating channels in indoor environments, and adapting to distribution shifts at test time with neural network modules. We also talk through the benefits and limitations of federated learning, exploring sparse models, optimizing communication between servers and devices, and much more.
The complete show notes for this episode can be found at https://twimlai.com/go/627.
By Sam Charrington · 4.7 (419 ratings)