
Today we’re joined by Riley Goodside, staff prompt engineer at Scale AI. In our conversation with Riley, we explore LLM capabilities and limitations, prompt engineering, and the mental models required to apply advanced prompting techniques. We dive deep into understanding LLM behavior, discussing the mechanism of autoregressive inference, comparing k-shot and zero-shot prompting, and dissecting the impact of RLHF. We also discuss the idea that prompting is a scaffolding structure that leverages the model's context to achieve the desired behavior and response, rather than an exercise in writing ability alone.
The complete show notes for this episode can be found at twimlai.com/go/652.
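The k-shot vs. zero-shot distinction mentioned above can be illustrated with a small sketch. This is a hypothetical example (the task, review texts, and labels are made up, not from the episode): a zero-shot prompt gives the model only an instruction and an input, while a k-shot prompt scaffolds the desired behavior with k worked examples first.

```python
# Hypothetical sentiment-classification task used to contrast prompt styles.
task = "Classify the sentiment of the review as positive or negative."

# Zero-shot: instruction plus the input, no examples.
zero_shot = f"{task}\n\nReview: The battery died after a week.\nSentiment:"

# k-shot (here k=2): worked examples precede the actual input, scaffolding
# the format and behavior we want the model's completion to follow.
examples = [
    ("The screen is gorgeous and setup was painless.", "positive"),
    ("Shipping took a month and support never replied.", "negative"),
]
k_shot = (
    task
    + "\n\n"
    + "\n\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
    + "\n\nReview: The battery died after a week.\nSentiment:"
)

print(zero_shot)
print("---")
print(k_shot)
```

Both prompts end mid-pattern at `Sentiment:`, which matters for autoregressive inference: the model continues the text from that point, so the examples bias it toward emitting a single label in the established format.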
Rating: 4.7 (414 ratings)