Unaligned with Robert Scoble

#15: Digging into explainable AI

Finally, an AI model that can tell you why it gives the answers it does!


Angelo Dalli is building a new kind of AI that fixes the problems of current large language models. Existing models generate errors, a.k.a. “hallucinations,” but can’t tell you why.


His AI, built using neurosymbolic techniques, aims to eliminate these errors and, even better, explain why it makes the decisions it does.


Here he talks with me about the state of the art in current AI and where the field is heading.


Sponsored by AI Top Tools: www.aitoptools.com
