The Nonlinear Library: Alignment Forum

AF - UC Berkeley course on LLMs and ML Safety by Dan H



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: UC Berkeley course on LLMs and ML Safety, published by Dan H on July 9, 2024 on The AI Alignment Forum.
The UC Berkeley course I co-taught now has lecture videos available: https://www.youtube.com/playlist?list=PLJ66BAXN6D8H_gRQJGjmbnS5qCWoxJNfe
Course site: Understanding LLMs: Foundations and Safety
Unrelatedly, the full content of a more conceptual AI safety course is available at https://www.aisafetybook.com/
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.