tl;dr
This post is an update on the Proceedings of ILIAD, a conference journal for AI alignment research intended to bridge the gap between the Alignment Forum and academia. Following our successful first issue, with 9 workshop papers from last year's ILIAD conference, we're launching a second issue in association with ILIAD 2: ODYSSEY. The conference will be held August 25-29, 2025 at Lighthaven in Berkeley, CA. Submissions to the Proceedings are open now and due June 25. Our goal is to support impactful, rapid, and readable research while carefully rationing scarce researcher time, using features such as public submissions, partial anonymity, partial confidentiality, reviewer-written abstracts, reviewer compensation, and open licensing. We are soliciting community feedback and suggestions for reviewers and editorial board members.
Motivation
Prior to the deep learning explosion, much early work on AI alignment occurred at MIRI, the Alignment Forum, and LessWrong (and their predecessors). Although there is now vastly [...]
---
Outline:
(00:12) tl;dr
(01:07) Motivation
(03:30) Experience with first issue of Proceedings
(05:14) General philosophy
(08:01) Design of the second issue of the Proceedings
(13:48) Possible design for an alignment journal
(16:07) Asks for readers
(16:52) Acknowledgements
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
By LessWrong