


This episode explores Markov decision processes (MDPs), covering stochastic environments, transition functions, reward functions, policies, value iteration, policy iteration, expected utility, finite versus infinite horizons, and discount factors.
Disclosure: This episode was generated using NotebookLM by uploading Professor Chris Callison-Burch's lecture notes and slides.
By Chris Callison-Burch