
Language models are everywhere today: they run in the background of Google Translate and other translation tools; they help operate voice assistants like Alexa and Siri; and, most interestingly, they are available through several experimental projects that try to emulate natural conversation, such as OpenAI's GPT-3 and Google's LaMDA. Can these models be hacked to gain access to the sensitive information they learned from their training data?