
Today we’re joined by Nicholas Carlini, a research scientist at Google Brain. Nicholas works at the intersection of machine learning and computer security, and his recent paper “Extracting Training Data from LLMs” has generated quite a buzz within the ML community. In our conversation, we discuss the current state of adversarial machine learning research, the dynamics of dealing with privacy issues in black-box vs. accessible models, what privacy attacks on vision models like diffusion models look like, and the scale of “memorization” within these models. We also explore Nicholas’ work on data poisoning, which looks to understand what happens if a bad actor can take control of a small fraction of the data an ML model is trained on.
The complete show notes for this episode can be found at twimlai.com/go/618.