Super Prompt: The Generative AI Podcast

Power and Responsibility of Large Language Models | Safety & Ethics | OpenAI Model Spec + RLHF | Anthropic Constitutional AI | Episode 27



With great power comes great responsibility. How do OpenAI, Anthropic, and Meta implement safety and ethics? As large language models (LLMs) get larger, the potential for using them for nefarious purposes looms larger as well. Anthropic uses Constitutional AI, while OpenAI uses a model spec combined with RLHF (Reinforcement Learning from Human Feedback). Not to be confused with ROFL (Rolling On the Floor Laughing). Tune into this episode to learn how leading AI companies use their Spidey powers to maximize helpfulness and harmlessness.

REFERENCE

OpenAI Model Spec

https://cdn.openai.com/spec/model-spec-2024-05-08.html#overview

Anthropic Constitutional AI

https://www.anthropic.com/news/claudes-constitution



For more information, check out https://www.superprompt.fm, where you can contact me and/or sign up for our newsletter.


Super Prompt: The Generative AI Podcast, by Tony Wan
