AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

Explainable AI Concepts [AI Today Podcast]

03.15.2024 - By AI & Data Today


The Explainable AI Layer of the Cognilytica Trustworthy AI Framework addresses the technical methods that go into understanding system behavior and making black boxes less opaque. In this episode of the AI Today podcast, Cognilytica AI experts Ron Schmelzer and Kathleen Walch discuss the interpretable and explainable AI layer.

The Explainable AI Layer

Separate from the notion of transparency of AI systems is the concept of AI algorithms being able to explain how they arrived at particular decisions. The ability for AI algorithms to explain the exact cause-and-effect from input data to output result is known as AI algorithmic explainability. However, it is widely recognized that few ML approaches, deep learning in particular, are inherently explainable.
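
To make the distinction concrete, here is a minimal sketch (not from the episode) contrasting an inherently interpretable model, whose coefficients tie each input feature directly to the output, with a black-box neural network whose internals would require post-hoc XAI techniques such as feature attribution to explain. It assumes scikit-learn is installed; the dataset and model choices are illustrative only.

```python
# Minimal sketch: inherently explainable model vs. black box.
# Assumes scikit-learn is installed; data and models are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Interpretable: each coefficient links an input feature to the prediction,
# so the cause-and-effect from input data to output can be read directly.
linear = LogisticRegression(max_iter=5000).fit(X, y)
top_features = sorted(zip(X.columns, linear.coef_[0]),
                      key=lambda pair: abs(pair[1]), reverse=True)[:5]
for name, coef in top_features:
    print(f"{name}: {coef:+.3f}")

# Black box: the network's weights do not map to a human-readable explanation,
# so a post-hoc XAI method would be needed to explain individual decisions.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000).fit(X, y)
print("MLP accuracy:", mlp.score(X, y))
```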

Relying on black box technology can be dangerous. Without understandability, we don’t have trust. To trust these systems, humans want accountability and explanation.

This episode goes into making AI understandable, including the main elements of the AI Explainability & Interpretability layer. We also review terminology, including Black Box and Explainable AI (XAI), and address the idea that not all algorithms are explainable and what that means in the context of Trustworthy AI.

Show Notes:

Free Intro to CPMAI course

CPMAI Certification

Subscribe to Cognilytica newsletter on LinkedIn

The Layers of Trustworthy AI

Free Intro to Trustworthy AI

Trustworthy AI Framework Training & Certification

AI Today Podcast: Trustworthy AI Series: Responsible AI

AI Today Podcast: Trustworthy AI Series: The Layers of Trustworthy AI

Trustworthy AI Series: Responsible AI Concepts [AI Today Podcast]
