


Make a testable claim.
Test it through observation and experiment.
Update the claim to match reality.
The common-sense notion that we should assess claims about the world through observation appears in texts dating back to ancient civilizations. But it was the Scientific Revolution of the 16th and 17th centuries that formalized and popularized what we now call the scientific method.
We are all steeped in this paradigm. And, at first glance, it seems entirely reasonable to expect that AI policy be firmly grounded in evidence. Many proponents of this view argue there simply isn’t enough proof to justify alarm over various AI risks: from AI systems perpetuating bias and discrimination, to enabling the development of biological weapons by malicious actors. Precautionary measures, they claim, are often premature, overly restrictive, and even unscientific or unethical — likely to stifle innovation and block the benefits AI could bring to society.
Yet [...]
---
Outline:
(01:29) Black Boxes and Benchmarks
(05:02) Addressing Corporate Incentives
(09:21) The Path to Better Evidence
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By Center for AI Safety