
Today we’re joined by Riley Goodside, staff prompt engineer at Scale AI. In our conversation with Riley, we explore LLM capabilities and limitations, prompt engineering, and the mental models required to apply advanced prompting techniques. We dive deep into understanding LLM behavior, discussing the mechanism of autoregressive inference, comparing k-shot and zero-shot prompting, and dissecting the impact of RLHF. We also discuss the idea that prompting is less about writing ability and more about building a scaffolding structure that leverages the model’s context to elicit the desired behavior and response.
The complete show notes for this episode can be found at twimlai.com/go/652.
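As a rough illustration of the scaffolding idea discussed in the episode (not an example from the show itself), the Python sketch below uses a made-up sentiment-labeling task to contrast the two prompting styles: a zero-shot prompt states only the instruction, while a k-shot prompt prepends worked demonstrations so the autoregressive model continues the pattern established in its context.

```python
# Minimal sketch of zero-shot vs. k-shot prompt scaffolding.
# The sentiment task, labels, and example reviews are hypothetical;
# the point is how demonstrations are packed into the model's context.

ZERO_SHOT_TEMPLATE = (
    "Classify the sentiment of this review as Positive or Negative.\n"
    "Review: {review}\n"
    "Sentiment:"
)

K_SHOT_TEMPLATE = (
    "Review: The battery lasts all day.\n"
    "Sentiment: Positive\n\n"
    "Review: The screen cracked within a week.\n"
    "Sentiment: Negative\n\n"
    "Review: {review}\n"
    "Sentiment:"
)

def build_prompt(review: str, k_shot: bool = True) -> str:
    """Assemble the context string an LLM would be asked to complete."""
    template = K_SHOT_TEMPLATE if k_shot else ZERO_SHOT_TEMPLATE
    return template.format(review=review)

if __name__ == "__main__":
    print(build_prompt("Setup was confusing at first.", k_shot=False))
    print("---")
    print(build_prompt("Setup was confusing at first.", k_shot=True))
```

Either string would be sent to the model unchanged; the difference is purely in how much of the desired behavior is demonstrated in the context rather than described in the instruction.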