
tl;dr
This post is an update on the Proceedings of ILIAD, a conference journal for AI alignment research intended to bridge the gap between the Alignment Forum and academia. Following our successful first issue with 9 workshop papers from last year's ILIAD conference, we're launching a second issue in association with ILIAD 2: ODYSSEY. The conference runs August 25-29, 2025 at Lighthaven in Berkeley, CA. Submissions to the Proceedings are open now and due June 25. Our goal is to support impactful, rapid, and readable research while carefully rationing scarce researcher time, using features such as public submissions, partial anonymity, partial confidentiality, reviewer-written abstracts, reviewer compensation, and open licensing. We are soliciting community feedback and suggestions for reviewers and editorial board members.
Motivation
Prior to the deep learning explosion, much early work on AI alignment occurred at MIRI, the Alignment Forum, and LessWrong (and their predecessors). Although there is now vastly [...]
---
Outline:
(00:12) tl;dr
(01:07) Motivation
(03:30) Experience with first issue of Proceedings
(05:14) General philosophy
(08:01) Design of the second issue of the Proceedings
(13:48) Possible design for an alignment journal
(16:07) Asks for readers
(16:52) Acknowledgements
---
Narrated by TYPE III AUDIO.