
Today we close out our NeurIPS series joined by Aravind Rajeswaran, a PhD student in machine learning and robotics at the University of Washington.
At NeurIPS, Aravind presented his paper MOReL: Model-Based Offline Reinforcement Learning. In our conversation, we explore model-based reinforcement learning and whether models are a “prerequisite” for achieving something analogous to transfer learning. We also dig into MOReL and recent progress in offline reinforcement learning, the differences between developing MOReL models and traditional RL models, and the theoretical results emerging from this research.
The complete show notes for this episode can be found at twimlai.com/go/442
By Sam Charrington · 4.7 (422 ratings)
