
By Jacob Haimes and Igor Krawczuk

Jacob and Igor argue that AI safety is hurting users, not helping them. The techniques used to make chatbots "safe" and "aligned," such as instruction tuning and RLHF, anthropomorphize AI systems such that they take advantage of our instincts as social beings. At the same time, Big Tech companies push these systems for "wellness" while dodging healthcare liability, causing real harms today. We discuss what actual safety would look like, drawing on self-driving car regulations.
Chapters
AI Ethics & Philosophy
AI Model Bias, Failures, and Impacts
AI Mental Health & Safety Concerns
Guidelines, Governance, and Censorship