In this session, Buck discusses how he thinks we should try to align artificial general intelligence (AGI) if no further fundamental progress were made on alignment, then explains how alignment researchers should try to improve this plan and ensure that whatever plans are available are executed competently.
By Aaron Bergman