When it comes to the foundation models created by the likes of Google, Anthropic, and OpenAI, we need to treat their makers as utility providers. So argues my guest, Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin, Germany. She further argues that the only way we can move forward safely is to create a transnational coalition of the willing that creates and enforces ethical and safety standards for AI. Why such a coalition is necessary, who might be part of it, how plausible it is that we can create such a thing, and more are covered in our conversation.
By Reid Blackman · 4.9 · 5454 ratings