The explosion of large language models (LLMs) into the public sphere in 2023 has raised many questions here in Washington about how much artificial intelligence (AI) should be subject to direct government scrutiny. Should we proceed with as much caution as Europe? Will AI as we know it today become misaligned with our interests? AI could lead us toward the next economic boom, but will the involvement of the government hasten or inhibit that?
To sift through some of these deeper policy questions, Shane spoke with Rob Reich about his work in philosophy, politics, and technology.
Rob Reich is a Professor of Political Science at Stanford University. He is also the faculty co-director of Stanford's Center on Philanthropy and Civil Society (PACS), the faculty director of the McCoy Center for Ethics in Society, and the associate director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
Rob discusses the merits (and limitations) of the precautionary principle and other points from his book, System Error: Where Big Tech Went Wrong and How We Can Reboot. Shane reiterates that stifling innovation can lead to worse outcomes than expected, but that thoughtlessness about AI is a mistake just as well.
Tune in as Shane and Rob examine the circuitry of America’s AI moment.
By AEI Podcasts