


AI isn’t just improving — it’s accelerating at a pace that’s hard to grasp.
In this episode, we go deeper into the mechanics behind modern AI systems. We unpack deep learning, explore how neural networks actually work, and break down one of the most important ideas in AI today: scaling laws — the reason why bigger models keep getting better.
We also discuss the speed of AI development, why progress feels exponential, and what this means for the future — including the growing gap between what AI can do and how well we understand it.
This episode is based on concepts from the Introduction to AI Safety, Ethics, and Society course and textbook by Dan Hendrycks, developed by the Center for AI Safety. We do not own this material — we’re interpreting and discussing it to share insights with our audience, and we’ll be exploring more topics from this work in future episodes.
If you want to explore the original material:
https://www.aisafetybook.com/
As AI systems become more powerful, the questions become bigger — not just technical, but societal.
No scripts. No filters — just a real conversation.
Hosted by Dhirendra & Suman
Connect with us
Linktree: https://linktr.ee/lovelacetalk
Lovelace Talks — conversations on tech, society, policy, and the systems shaping our world.
By Lovelace