In episode 37 of The Gradient Podcast, Andrey Kurenkov speaks to Laura Weidinger
Laura is a senior research scientist at DeepMind focusing on AI ethics. She is also a PhD candidate at the University of Cambridge, studying philosophy of science and, specifically, approaches to measuring the ethics of AI systems. Previously, Laura worked in technology policy at the UK and EU levels as a policy executive at techUK. She then pivoted to cognitive science research, studying human learning at the Max Planck Institute for Human Development in Berlin, and was a guest lecturer at Ada, the National College for Digital Skills. She received her Master's degree from the School of Mind and Brain at the Humboldt University of Berlin, with a focus on neuroscience, philosophy, and cognitive science.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
(00:00) Intro
(01:20) Path to AI
(04:25) Research in Cognitive Science
(06:40) Interest in AI Ethics
(14:30) Ethics Considerations for Researchers
(17:38) Ethical and social risks of harm from language models
(25:30) Taxonomy of Risks posed by Language Models
(27:33) Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
(33:25) Main Insight for Measuring Harm
(35:40) The EU AI Act
(39:10) Alignment of language agents
(46:10) GPT-4Chan
(53:40) Interests outside of AI
(55:30) Outro
Links:
Ethical and social risks of harm from language models
Taxonomy of Risks posed by Language Models
Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
Alignment of language agents