Working Humans

AI: Keeping Data Safe


Every supposedly impenetrable LLM can be jailbroken. And every service agreement that guarantees your data, entered into a prompt window, will not be used to train future models can be broken, loopholed, or hacked. Once you enter content into a Large Language Model, or post anything onto the web, it's no longer yours. A guide to keeping data safe in the AI landscape.


Working Humans, by Fiona Passantino