AI systems often make decisions that are difficult for humans to understand. In this episode, we discuss the importance of explainability and transparency in AI systems, how they can be achieved, what problems biased training data causes, and how explainability helps us overcome bias.
What is explainability, and why is it significant in the context of AI?
How can explainability increase trust in and reliance on AI-based systems for security applications? (see the first sketch after this list)
How can we achieve transparency in AI systems, and how can we use it to overcome bias?
Given that bias will never disappear entirely, how do we account for its impact?
Examples of problematic biases
Do we have sufficient data to train AI models to recognize and classify APT (Advanced Persistent Threat) activity? What are the risks of overfitting and underfitting? (see the second sketch below)
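As a concrete illustration of the explainability techniques discussed, here is a minimal sketch using scikit-learn's permutation feature importance on a toy security classifier. The feature names and synthetic data are hypothetical, not from the episode; the point is that a model-agnostic attribution method can show which signals a detector actually relies on.

```python
# A minimal sketch of post-hoc explainability for a security classifier,
# using permutation feature importance (model-agnostic). The feature names
# and synthetic data below are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features for a network-alert classifier.
feature_names = ["bytes_out", "failed_logins", "dns_entropy", "session_count"]
X = rng.normal(size=(1000, 4))
# Synthetic labels: "malicious" correlates with failed_logins and dns_entropy.
y = (X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# held-out accuracy drops. A large drop means the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

In this toy setup, the two features that actually drive the labels should come out with the largest importance scores, which is exactly the kind of sanity check that builds trust in a security model's decisions.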
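And a minimal sketch of diagnosing overfitting versus underfitting by comparing training and held-out accuracy across model capacities. The small synthetic dataset stands in for scarce APT telemetry and is purely illustrative.

```python
# A minimal sketch of checking for overfitting/underfitting by comparing
# training and held-out scores across model capacities. The dataset is
# synthetic and hypothetical; real APT data would be far scarcer and noisier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))  # small dataset, mostly noise features
# True rule depends on two features plus label noise.
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.7, size=300) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# An unconstrained tree can memorize the training set (overfitting);
# a depth-1 tree may be too simple for the two-feature rule (underfitting).
for depth in (None, 3, 1):
    model = DecisionTreeClassifier(max_depth=depth, random_state=1)
    model.fit(X_train, y_train)
    print(f"max_depth={depth}: train={model.score(X_train, y_train):.2f}, "
          f"test={model.score(X_test, y_test):.2f}")
```

A large gap between training and test accuracy signals overfitting, while low scores on both signal underfitting; with the limited labeled APT activity available in practice, both failure modes are real risks.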
Links mentioned in the show:
Academic paper evaluating the security of Copilot from 12/2021
Can open-source LLMs detect bugs in code?