
In this episode of Generative AI 101, we pop the hood on OpenAI's o1 model and explore what we know about its inner workings. We'll break down its advanced "chain of thought" reasoning, its training methods, including Reinforcement Learning from Human Feedback (RLHF), and the safety measures keeping it from going rogue. From AI boot camp to solving complex problems like a Zen master, o1 takes its time to think through answers with precision.
Connect with Us: If you enjoyed this episode or have questions, reach out to Emily Laird on LinkedIn. Stay tuned for more insights into the evolving world of generative AI. And remember, you now know more about what's under the hood of OpenAI's new o1 Preview than you did before!
Connect with Emily Laird on LinkedIn