Arvind Narayanan is a leading voice disambiguating what AI does and does not do. His work, with Sayash Kapoor at AI Snake Oil, is one of the few beacons of reason in an AI media ecosystem with quite a few bad apples. Arvind is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. You can learn more about Arvind and his work on his website, X, or Google Scholar.
This episode is all in on figuring out what current LLMs do and don't do. We cover AGI, agents, scaling laws, autonomous scientists, and past failings of AI (i.e., those that came before generative AI took off). We also briefly touch on how all of this informs AI policy, and how academics can decide what to work on to generate better outcomes for technology.
Transcript and full show notes: https://www.interconnects.ai/p/interviewing-arvind-narayanan
Chapters
* [00:00:00] Introduction
* [00:01:54] Balancing being an AI critic while recognizing AI's potential
* [00:04:57] Challenges in AI policy discussions
* [00:08:47] Open source foundation models and their risks
* [00:15:35] Personal use cases for generative AI
* [00:22:19] CORE-Bench and evaluating AI scientists
* [00:25:35] Agents and artificial general intelligence (AGI)
* [00:33:12] Scaling laws and AI progress
* [00:37:41] Applications of AI outside of tech
* [00:39:10] Career lessons in technology and AI research
* [00:41:33] Privacy concerns and AI
* [00:47:06] Legal threats and responsible research communication
* [00:50:01] Balancing scientific research and public distribution
Get Interconnects (https://www.interconnects.ai/podcast)...
... on YouTube: https://www.youtube.com/@interconnects
... on Twitter: https://x.com/interconnectsai
... on LinkedIn: https://www.linkedin.com/company/interconnects-ai
... on Spotify: https://open.spotify.com/show/2UE6s7wZC4kiXYOnWRuxGv