
A couple of months ago, I joined the Canadian government’s AI strategy task force. Out of thirty members, I was one of only four focused on safety. Everyone else was there to talk growth. It reflects a pattern playing out all over the world: we’re going all in on AI, and the prevailing view is that regulation will only slow us down.
It’s hard to overstate how quickly this shift happened. Just a few years ago, even Elon Musk was calling for an industry-wide pause on AI development, and the Biden administration was developing an “AI Bill of Rights” – one of the most thoughtful and comprehensive frameworks for AI regulation I’ve ever seen.
The architect of that initiative was Dr. Alondra Nelson. Today, she leads the Science, Technology, and Social Values Lab at the Institute for Advanced Study and is fresh off a stint on Zohran Mamdani’s mayoral transition team in New York. I wanted to have her on to wrestle with an urgent question: how do you make a technology safe when nobody seems particularly interested in regulating it – and what might happen if we don’t?
Mentioned:
Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, by the White House Office of Science and Technology Policy
The mirage of AI deregulation, by Alondra Nelson (Science)
International AI Safety Report 2026, by Yoshua Bengio et al.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
By The Globe and Mail