
In this episode of Generative AI 101, we pop the hood on OpenAI's o1 model and explore what we know about its inner workings. We'll break down its advanced "chain of thought" reasoning, its training methods like Reinforcement Learning from Human Feedback (RLHF), and the safety measures keeping it from going rogue. From AI boot camp to solving complex problems like a Zen master, o1 takes its time to think through answers with precision.
Connect with Us: If you enjoyed this episode or have questions, reach out to Emily Laird on LinkedIn. Stay tuned for more insights into the evolving world of generative AI. And remember, you now know more about what's under the hood of OpenAI's new o1 Preview than you did before!
Connect with Emily Laird on LinkedIn
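For listeners who want a concrete taste of the RLHF mentioned above: reward models in RLHF are commonly trained with a pairwise Bradley-Terry loss that pushes the model to score the human-preferred answer higher. This is a minimal illustrative sketch of that loss, not OpenAI's actual implementation:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise Bradley-Terry loss used to train RLHF reward models:
    -log(sigmoid(r_chosen - r_rejected)).
    The loss shrinks as the reward model scores the human-preferred
    answer increasingly higher than the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Reward model already prefers the chosen answer: small loss.
print(round(preference_loss(2.0, 0.0), 4))  # ~0.1269
# Reward model prefers the rejected answer: large loss.
print(round(preference_loss(0.0, 2.0), 4))  # ~2.1269
```

Gradient descent on this loss over many human comparisons yields the reward signal that the language model is then tuned against with reinforcement learning.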
By Emily Laird