TL;DR
We are excited to announce the fourth iteration of ARENA (Alignment Research Engineer Accelerator), a 4-5 week ML bootcamp with a focus on AI safety! ARENA's mission is to provide talented individuals with the skills, tools, and environment necessary to upskill in ML engineering, for the purpose of contributing directly to AI alignment in technical roles. ARENA will run in person at LISA from 2nd September to 4th October (the first week is an optional review of the fundamentals of neural networks).
Apply here before 23:59 on July 20th, anywhere on Earth!
Summary
ARENA has been successfully run three times, with alumni going on to become MATS scholars and LASR participants; work as AI safety engineers at Apollo Research, Anthropic, METR, and OpenAI; and even start their own AI safety organisations!
This iteration will run from 2nd September to 4th October (the first week is an optional review of the fundamentals [...]
---
Outline:
TL;DR
Summary
Outline of Content
Chapter 0 - Fundamentals
Chapter 1 - Transformers and Interpretability
Chapter 2 - Reinforcement Learning
Chapter 3 - Model Evaluation
Chapter 4 - Capstone Project
Staff
FAQ
Q: Who is this program suitable for?
Q: What will an average day in this program look like?
Q: How many participants will there be?
Q: Will there be prerequisite materials?
Q: When is the application deadline?
Q: What will the application process look like?
Q: Can I join for some sections but not others?
Q: Will you pay stipends to participants?
Q: Which costs will you be covering for the in-person programme?
Q: I'm interested in trialling some of the material or recommending material to be added. Is there a way I can do this?
Link to Apply
---