Today we kick off our coverage of the 2023 ICLR conference, joined by Christos Louizos, an ML researcher at Qualcomm Technologies. In our conversation with Christos, we explore his paper Hyperparameter Optimization through Neural Network Partitioning and a few of his colleagues' works from the conference. We discuss methods for speeding up attention mechanisms in transformers, scheduling operations for computation graphs, estimating channels in indoor environments, and adapting to distribution shifts at test time with neural network modules. We also talk through the benefits and limitations of federated learning, exploring sparse models, optimizing communication between servers and devices, and much more.
The complete show notes for this episode can be found at https://twimlai.com/go/627.