


Today we close out our NeurIPS series joined by Aravind Rajeswaran, a PhD student in machine learning and robotics at the University of Washington.
At NeurIPS, Aravind presented his paper MOReL: Model-Based Offline Reinforcement Learning. In our conversation, we explore model-based reinforcement learning and whether models are a “prerequisite” for achieving something analogous to transfer learning. We also dig into MOReL and recent progress in offline reinforcement learning, the differences between developing MOReL models and traditional RL models, and the theoretical results emerging from this research.
The complete show notes for this episode can be found at twimlai.com/go/442
By Sam Charrington · 4.7 (422 ratings)
