Philosophical Disquisitions

109 - How Can We Align Language Models like GPT with Human Values?





In this episode of the podcast I chat to Atoosa Kasirzadeh. Atoosa is an Assistant Professor/Chancellor's Fellow at the University of Edinburgh. She is also the Director of Research at the Centre for Technomoral Futures at Edinburgh. We chat about the alignment problem in AI development: roughly, the problem of how we ensure that AI acts in ways that are consistent with human values. We focus, in particular, on the alignment problem for language models such as ChatGPT, Bard and Claude, and on how some old ideas from the philosophy of language could help us to address it.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon, or whatever your preferred service might be.


Relevant Links
  • Atoosa's webpage
  • Atoosa's paper (with Iason Gabriel) 'In Conversation with AI: Aligning Language Models with Human Values'



