Security Insights

AI, Testing and Red Teaming, with Peter Garraghan

Artificial intelligence is often described as a "black box": we can see what we put in and what comes out, but not how the model arrives at its results.

And, unlike conventional software, large language models are non-deterministic. The same inputs can produce different results.

This makes it hard to secure AI systems, and to assure users that they are secure.

There is already growing evidence that malicious actors are using AI to find vulnerabilities, carry out reconnaissance, and fine-tune their attacks.

But the risks posed by AI systems themselves could be even greater.

Our guest this week has set out to secure AI by developing red team testing methods that account for both the nature of AI and the unique risks it poses.

Peter Garraghan is a professor at Lancaster University, and founder and CEO of Mindgard.

Interview by Stephen Pritchard
