

At first, prompting seemed to be a temporary workaround for getting the most out of large language models. But over time, it's become critical to the way we interact with AI.
On the Lightcone, Garry, Harj, Diana, and Jared break down what they've learned from working with hundreds of founders building with LLMs: why prompting still matters, where it breaks down, and how teams are making it more reliable in production.
They share real examples of prompts that failed, how companies are testing for quality, and what the best teams are doing to make LLM outputs useful and predictable.
The prompt from Parahelp (S24) discussed in the episode: https://parahelp.com/blog/prompt-design
By Y Combinator
