Explaining Explainability – the podcast on Explainable Artificial Intelligence.

Episode 4 "Co-Construction"



As TRR 318 understands it, explanations are not delivered in a one-way process - they are co-constructed by the explainer and the explainee. And it seems that Large Language Models (LLMs) are pretty good at doing just that - but do they really co-construct? And what is co-construction, exactly? Prof. Britta Wrede discusses these questions with two experts on LLMs, Prof. Axel Ngonga Ngomo from Paderborn University and Prof. Henning Wachsmuth from Leibniz Universität Hannover.

By TRR 318