When it comes to the foundation models created by the likes of Google, Anthropic, and OpenAI, we need to treat them as utility providers. So argues my guest, Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin, Germany. She further argues that the only way we can move forward safely is to create a transnational coalition of the willing that sets and enforces ethical and safety standards for AI. Our conversation covers why such a coalition is necessary, who might be part of it, how plausible it is that we can create one, and more.
By Reid Blackman · 4.9 (5454 ratings)