

Language models are everywhere today: they run in the background of Google Translate and other translation tools; they help operate voice assistants like Alexa and Siri; and, most interestingly, they are available through several experimental projects that try to emulate natural conversation, such as OpenAI’s GPT-3 and Google’s LaMDA. Can these models be hacked to gain access to the sensitive information they learned from their training data?
By Malicious Life · 4.8 (930 ratings)