


Audio note: this article contains 363 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.
Introduction
In this post, we describe a generalization of Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs) called crisp supra-MDPs and supra-POMDPs. The new feature of these decision processes is that the stochastic transition dynamics are multivalued, i.e. specified by credal sets. We describe how supra-MDPs give rise to crisp causal laws, the hypotheses of infra-Bayesian reinforcement learning. Furthermore, we discuss how supra-MDPs can approximate MDPs by a coarsening of the state space. This coarsening allows an agent to be agnostic about the detailed dynamics while still having performance guarantees for the full MDP.
Analogously to the classical theory, we describe an algorithm to compute a Markov optimal policy for supra-MDPs with finite time [...]
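The algorithm mentioned above, in the classical spirit of backward induction, can be illustrated with a small sketch. The code below is not from the original post: it is a minimal, hypothetical implementation of maximin (worst-case) value iteration for a two-state supra-MDP whose credal sets are intervals on transition probabilities, with Murphy (nature) choosing the worst distribution in each interval. All numbers and names are illustrative assumptions.

```python
# Hedged sketch: finite-horizon maximin backward induction for a tiny
# "supra-MDP" over a two-element state space. Each transition's credal
# set is an interval (lo, hi) of admissible values for P(next state = 1).

def worst_case_value(interval, v):
    """Nature minimizes the expected next-state value over the credal set.
    Since p*v[1] + (1-p)*v[0] is linear in p, the minimum over an
    interval is attained at an endpoint."""
    lo, hi = interval
    return min(p * v[1] + (1 - p) * v[0] for p in (lo, hi))

def robust_backward_induction(T, reward, credal):
    """reward[s][a]: immediate reward; credal[s][a]: interval on P(s'=1).
    Returns worst-case values at time 0 and a Markov policy policy[t][s]."""
    n_states, n_actions = 2, len(reward[0])
    v = [0.0, 0.0]                      # terminal values
    policy = []
    for _ in range(T):                  # backward induction over horizon T
        q = [[reward[s][a] + worst_case_value(credal[s][a], v)
              for a in range(n_actions)] for s in range(n_states)]
        policy.append([max(range(n_actions), key=lambda a: q[s][a])
                       for s in range(n_states)])
        v = [max(q[s]) for s in range(n_states)]
    policy.reverse()                    # policy[t][s]: action at time t
    return v, policy

# Illustrative instance: action 0 = "try", with imprecise dynamics;
# action 1 = "do nothing", which keeps the state fixed (a precisely
# specified transition, i.e. a singleton credal set). State 1 pays 1.
reward = [[0.0, 0.0], [1.0, 1.0]]
credal = [[(0.4, 0.7), (0.0, 0.0)],     # from state 0
          [(0.8, 1.0), (1.0, 1.0)]]     # from state 1
v, policy = robust_backward_induction(3, reward, credal)
```

The resulting policy maximizes the agent's guaranteed (worst-case) return against any selection from the credal sets, which is the sense of optimality used for crisp infra-Bayesian hypotheses.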
---
Outline:
(00:22) Introduction
(01:41) Supra-Markov Decision Processes
(01:45) Defining supra-MDPs
(06:55) Supra-bandits
(08:02) Crisp causal laws arising from supra-MDPs and (supra-)RDPs
(19:25) Approximating an MDP with a supra-MDP
(22:42) Computing the Markov optimal policy for finite time horizons
(25:02) Existence of stationary optimal policy
(28:18) Interpreting supra-MDPs as stochastic games
(32:40) Regret bounds for episodic supra-MDPs
(34:16) Supra-Partially Observable Markov Decision Processes
(36:16) Equivalence of crisp causal laws and crisp supra-POMDPs
(37:18) Proof of Proposition 2
(41:21) Acknowledgements
The original text contained 8 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
[Figure: transitions over a two-element state space, with an action $n$ for "do nothing." In some cases the transition probabilities are precisely specified; in others, credal sets (notated here by intervals) describe the transition probabilities.]
By LessWrong
