Marketing^AI

Causal Discovery in AI: Internal vs. External Explanations



We explore two distinct approaches to explainable AI (XAI): internalist (mechanistic) and externalist (phenomenological). Attention-Based Causal Discovery (ABCD), representing the internalist view, focuses on understanding a specific model's internal computational logic by analyzing its self-attention mechanisms to uncover learned, often non-obvious, dependencies. Conversely, Prompt-Based Large Language Model (LLM) Reasoning, the externalist approach, treats LLMs as knowledge repositories that generate plausible causal hypotheses about real-world phenomena, relying on generalized patterns rather than a model's specific internal states. A comparative case study involving movie recommendations illustrates how ABCD can explain a model's unique learned behavior, which LLMs, despite their vast knowledge, cannot: they are designed to explain the world, not another model's specific reasoning. Ultimately, the source argues that these methods are complementary rather than competing, and suggests integrating them for more robust and trustworthy AI explanations, with ABCD diagnosing the model and LLMs translating those technical insights for human understanding.
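The ABCD-style inspection described above starts from the attention matrix itself: each row tells you which inputs a given position depends on. A minimal sketch of that idea, using a toy NumPy self-attention layer (the matrices, dimensions, and random weights are illustrative assumptions, not the method's actual setup):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_weights(X, Wq, Wk):
    """Return the attention matrix A, where A[i, j] measures how
    strongly token i attends to token j (rows sum to 1)."""
    Q, K = X @ Wq, X @ Wk
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))    # 4 tokens, 8-dim embeddings (toy data)
Wq = rng.normal(size=(8, 8))   # hypothetical learned query projection
Wk = rng.normal(size=(8, 8))   # hypothetical learned key projection

A = attention_weights(X, Wq, Wk)
# The argmax column in each row is the input each position depends
# on most -- the raw signal an internalist analysis reads off.
strongest = A.argmax(axis=1)
print(A.round(3))
print("strongest dependency per token:", strongest)
```

In a real model the projections would come from trained weights rather than a random generator, and ABCD-style methods would aggregate such matrices across heads and layers; the point here is only that the dependency structure is read from the model's own internals, which is exactly what a prompted LLM has no access to.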


Marketing^AI, by Enoch H. Kang