LessWrong (30+ Karma)

“MATS AI Safety Strategy Curriculum v2” by DanielFilan, Ryan Kidd


As part of our Summer 2024 Program, MATS ran a series of discussion groups focused on questions and topics we believe are relevant to prioritizing AI safety research. Each weekly session centered on one overarching question and was accompanied by readings and suggested discussion questions. The purpose of running these discussions was to increase scholars’ knowledge of the AI safety ecosystem and of models of how AI could cause a catastrophe, and to hone scholars’ ability to think critically about threat models—ultimately, in service of helping scholars become excellent researchers.

The readings and questions were largely based on the curriculum from the Winter 2023-24 Program, with two changes:

  • We reduced the number of weeks, since in the previous cohort scholars found it harder to devote time to discussion groups later in the program.
  • For each week we selected a small number of “core readings”, since many scholars were unable [...]

---

Outline:

(01:57) Week 1: How powerful is intelligence?

(02:03) Core readings

(02:37) Other readings

(03:34) Discussion questions

(04:50) Week 2: How and when will transformative AI be made?

(04:56) Core readings

(07:06) Other readings

(09:23) Discussion questions

(10:30) Week 3: How could we train AIs whose outputs we can’t evaluate?

(10:37) Core readings

(12:00) Other readings

(15:01) Discussion questions

(16:01) Week 4: Will AIs fake alignment?

(16:06) Core readings

(16:25) Other readings

(16:28) On inner and outer alignment

(17:15) On reasons to think deceptive alignment is likely

(18:07) Discussion questions

(19:25) Week 5: How should AI be governed?

(19:31) Core readings

(20:49) Other readings

(24:10) Discussion questions

(25:32) Readings that did not fit into any specific week

(26:26) Acknowledgements

---

First published:

October 7th, 2024

Source:

https://www.lesswrong.com/posts/rhEXTkDmssrHBNrfS/mats-ai-safety-strategy-curriculum-v2

---

Narrated by TYPE III AUDIO.
