
Sign up to receive in your inbox: http://eepurl.com/i7RgRM
This week we note regular CVEs in AI libraries from NVIDIA, TensorFlow, and PyTorch. We discuss a novel prompt injection technique called "policy puppetry," along with malware distribution through fake AI video generators and Meta's release of an open-source AI security toolset including LlamaFirewall. We also cover Israel's experimental use of AI in warfare, Russia's AI-enabled drones in Ukraine, China's crackdown on AI misuse, Dreadnode's research on AI in red teaming, geolocation doxing via multimodal LLMs, safety research on inference-time attacks against autonomous vehicles, ConfigScan for analyzing malicious configurations on Hugging Face, Spotlight as a physical defense against deepfakes, and RepliBench for benchmarking autonomous replication by LLM agents.