Language models are everywhere today: they run in the background of Google Translate and other translation tools; they help operate voice assistants like Alexa and Siri; and, most interestingly, they are available via several experimental projects trying to emulate natural conversation, such as OpenAI’s GPT-3 and Google’s LaMDA. Can these models be hacked to gain access to the sensitive information they learned from their training data?
By Malicious Life · 4.8 (929 ratings)