
Language models are everywhere today: they run in the background of Google Translate and other translation tools; they help operate voice assistants like Alexa or Siri; and, most interestingly, they are available via several experimental projects that try to emulate natural conversation, such as OpenAI’s GPT-3 and Google’s LaMDA. Can these models be hacked to gain access to the sensitive information they learned from their training data?
By Malicious Life · 4.8 (930 ratings)