
As part of the MATS Winter 2023-24 Program, scholars were invited to take part in a series of weekly discussion groups on AI safety strategy. Each strategy discussion focused on a specific crux we deemed relevant to prioritizing AI safety interventions and was accompanied by a reading list and suggested discussion questions. The discussion groups were facilitated by several MATS alumni and other AI safety community members and generally ran for 1-1.5 hours.
As assessed by our alumni reviewers, scholars in our Summer 2024 Program were much better at writing concrete plans for their research than they were at explaining their research's theory of change. We think it is generally important for researchers, even those early in their careers, to critically evaluate the impact of their work.
---
Outline:
(01:51) Week 1: How will AGI arise?
(03:49) Suggested discussion questions
(05:43) Week 2: Is the world vulnerable to AI?
(09:54) Suggested discussion questions
(12:08) Week 3: How hard is AI alignment?
(14:19) Suggested discussion questions
(16:16) Week 4: How should we prioritize AI safety research?
(19:23) Suggested discussion questions
(21:11) Week 5: What are AI labs doing?
(23:13) Suggested discussion questions
(25:13) Week 6: What governance measures reduce AI risk?
(27:42) Suggested discussion questions
(29:49) Week 7: What do positive futures look like?
(33:07) Suggested discussion questions
(34:37) Acknowledgements
---
First published:
Source:
Narrated by TYPE III AUDIO.