

✉️ Stay Updated With 2nd Order Thinkers: https://www.2ndorderthinkers.com/
Humans x AI behaviour mindmap: https://xmind.ai/share/ZfoXStHT?xid=Gr8eiBM3 (beta)
I translate new AI research into plain English so you can build a sharp, hype-free view of where this is going.
+++
Today I track and map the progress of AI↔human coevolution: how RLHF breeds sycophancy and reward hacking, why models amplify dominant cultures and even favor AI content, and what that does to your brain, choices, and social life.
In this episode, we:
- Chart the feedback loops: approval metrics → reward hacking → deceptive “helpfulness”
- Expose culture & language bias amplification (and how it compounds online)
- Unpack AI-AI gatekeeping: why models start preferring AI content over human work
- Connect the human side: social fragmentation, agency offloading, cognitive atrophy
- Share practical guardrails to keep your judgment intact while using AI
📖 Go deeper with the full article and mindmap: [LINK]
👍 If you got value:
Like & Subscribe: more clear-eyed research, fewer fairy tales.
Comment: Which feedback loop have you felt personally?
Share: Pass this to someone outsourcing too many decisions to a chatbot.
🔗 Connect with me on LinkedIn: https://www.linkedin.com/in/jing--hu/
Stay curious, stay skeptical. 🧠
By Jing Hu