
Applications are open for the ML Alignment & Theory Scholars (MATS) Summer 2025 Program, running Jun 16-Aug 22, 2025. First-stage applications are due Apr 18!
MATS is a twice-yearly, 10-week AI safety research fellowship program operating in Berkeley, California, with an optional 6-12 month extension program for select participants. Scholars are supported with a research stipend, shared office space, seminar program, support staff, accommodation, travel reimbursement, and computing resources. Our mentors come from a variety of organizations, including Anthropic, Google DeepMind, OpenAI, Redwood Research, GovAI, UK AI Security Institute, RAND TASP, UC Berkeley CHAI, Apollo Research, AI Futures Project, and more! Our alumni have been hired by top AI safety teams (e.g., at Anthropic, GDM, UK AISI, METR, Redwood, Apollo), founded research groups (e.g., Apollo, Timaeus, CAIP, Leap Labs), and maintain a dedicated support network for new researchers.
If you know anyone you think would be interested in [...]
---
Outline:
(01:41) Program details
(02:54) Applications (now!)
(03:12) Research phase (Jun 16-Aug 22)
(04:14) Research milestones
(04:44) Community at MATS
(05:41) Extension phase
(06:07) Post-MATS
(07:44) Who should apply?
(08:45) Applying from outside the US
(09:27) How to apply
---
Narrated by TYPE III AUDIO.
