Today we're joined by Peter Hase, a fifth-year PhD student at the University of North Carolina NLP lab. We discuss "scalable oversight" and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how interpretability researchers probe the matrices inside these models, and explore the two schools of thought on how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risks of releasing open-source foundation models.
The complete show notes for this episode can be found at twimlai.com/go/679.
By Sam Charrington · 4.7 (419 ratings)