July 01, 2020
Alignment Newsletter #106: Evaluating generalization ability of learned reward models
22 minutes · Recorded by Robert Miles · By Rohin Shah et al.
More information about the newsletter here.