

Language models are everywhere today: they run in the background of Google Translate and other translation tools; they help operate voice assistants like Alexa and Siri; and, most interestingly, they are available via several experimental projects that try to emulate natural conversation, such as OpenAI's GPT-3 and Google's LaMDA. Can these models be hacked to gain access to the sensitive information they learned from their training data?
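The idea behind such an attack is that a language model can memorize rare strings from its training data, and an attacker who knows a likely prefix can prompt the model into regurgitating what follows. A toy sketch (this is an illustrative bigram model, not the technique discussed in the episode; the planted "hunter2" secret is invented for the demo):

```python
from collections import defaultdict

# Toy "language model": bigram counts over a tiny training corpus
# that contains a planted fake secret ("hunter2").
training_text = (
    "the quick brown fox jumps over the lazy dog . "
    "alice emailed bob her password hunter2 yesterday . "
    "the lazy dog sleeps all day ."
)

# For each word, record which words follow it in the training text.
model = defaultdict(list)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    model[prev].append(nxt)

def generate(prompt_word, length=2):
    """Greedily continue from the prompt with the most common follower."""
    out = [prompt_word]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(max(set(followers), key=followers.count))
    return " ".join(out)

# An attacker who guesses the prefix "password" probes the model:
print(generate("password"))  # prints "password hunter2 yesterday"
```

Because the secret appears only once in the corpus, the model's only continuation of "password" is the memorized secret itself. Real extraction attacks against large neural models work on the same principle, just with sampling and ranking instead of a lookup table.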
By Malicious Life
4.8 (930 ratings)
