Diffractor is the first author of this paper.
Official title: "Regret Bounds for Robust Online Decision Making"
Abstract: We propose a framework which generalizes "decision making with structured observations" by allowing robust (i.e. multivalued) models. In this framework, each model associates each decision with a convex set of probability distributions over outcomes. Nature can choose distributions out of this set in an arbitrary (adversarial) manner, that can be nonoblivious and depend on past history. The resulting framework offers much greater generality than classical bandits and reinforcement learning, since the realizability assumption becomes much weaker and more realistic. We then derive a theory of regret bounds for this framework. Although our lower and upper bounds are not tight, they are sufficient to fully characterize power-law learnability. We demonstrate this theory in two special cases: robust linear bandits and tabular robust online reinforcement learning. In both cases [...]
The original text contained 2 footnotes which were omitted from this narration.
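The setup described in the abstract can be illustrated with a toy example. This is not the paper's algorithm, just a minimal sketch: each decision (arm) is associated with a set of outcome distributions, modeled here as a hypothetical interval of Bernoulli means, and an adversarial "nature" picks any mean inside that set, possibly depending on past history.

```python
import random

# Hypothetical robust model: each arm maps to an interval of Bernoulli means,
# a simple convex set of distributions (intervals and arms are made up for
# illustration, not taken from the paper).
ARMS = {0: (0.2, 0.9), 1: (0.4, 0.6)}

def nature(arm, history):
    """Adversarial, possibly history-dependent choice within the model's set.
    Here nature simply picks the worst (lowest) mean in the interval."""
    lo, hi = ARMS[arm]
    return lo

def maximin_arm():
    """Against a fully adversarial nature, the best guarantee comes from the
    arm with the highest worst-case mean."""
    return max(ARMS, key=lambda a: ARMS[a][0])

random.seed(0)
T = 10_000
history, total = [], 0.0
arm = maximin_arm()
for t in range(T):
    p = nature(arm, history)          # nature picks a distribution in the set
    reward = 1.0 if random.random() < p else 0.0
    total += reward
    history.append((arm, reward))

# Regret here is measured against the best worst-case (maximin) benchmark,
# a simplification of the regret notion studied in the paper.
best_worst_case = max(lo for lo, hi in ARMS.values())
regret = best_worst_case * T - total
print(f"arm={arm}, avg reward={total/T:.3f}, regret={regret:.1f}")
```

The point of the sketch is the weakened realizability assumption: the learner's model need only contain the true distribution in a set, rather than pin it down exactly, and nature is free to move within that set at every step.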
---
Linkpost URL: https://arxiv.org/abs/2504.06820
Narrated by TYPE III AUDIO.