
Article: https://www.aiblade.net/p/chatgpt-send-me-someones-calendar
OpenAI recently introduced GPTs to premium users, allowing people to interact with third-party web services via a Large Language Model. But is this safe when AI is so easy to trick?
In this post, I will present my novel research: exploiting a personal assistant GPT, causing it to unwittingly email the contents of someone’s calendar to an attacker. I will expand on the wider problems related to this vulnerability and discuss the future of similar exploits.