The Nonlinear Library: Alignment Forum

AF - Virtual AI Safety Unconference 2024 by Orpheus Lummis


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Virtual AI Safety Unconference 2024, published by Orpheus Lummis on March 13, 2024 on The AI Alignment Forum.
When: May 23rd to May 26th, 2024
Where: Online, participate from anywhere.
VAISU is a collaborative and inclusive event for AI safety researchers, aiming to facilitate collaboration, mutual understanding, and progress on problems of AI risk. It will feature talks, research discussions, and activities around the question: "How do we ensure the safety of AI systems, in the short and long term?" This includes topics such as alignment, corrigibility, interpretability, cooperativeness, understanding humans and human value structures, AI governance, strategy, …
Engage with the community: Apply to participate, give a talk, or propose a session. Come share your insights, discuss, and collaborate on subjects that matter to you and the field.
Visit vaisu.ai to apply and to read further.
VAISU team
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.