
Prefer reading instead? The full article is available here. The podcast is also available on Spotify and Apple Podcasts. Subscribe to keep up with the latest drops.
Most ML models answer one question: what is likely to happen? The harder question is what will change if you intervene. That gap is where causal reasoning begins.
In this episode, we explore how constraint-based algorithms learn causal structure directly from data, and how LLMs can step in to resolve what statistics alone cannot.
You’ll learn:
* How PC, FCI, and RFCI discover causal graphs using conditional independence tests, and what assumptions each one makes.
* How to encode domain knowledge as hard constraints, so the algorithm stops producing edges that are statistically plausible but practically nonsensical.
* How LLMs can review and refine the output graph, resolving ambiguous orientations with domain reasoning when the data runs out of signal.
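To make the first bullet concrete, here is a minimal sketch of the conditional-independence idea behind PC-style skeleton discovery. It uses only the standard library and a toy causal chain X → Y → Z; the function names and the Fisher z-test threshold are illustrative, not taken from any particular library.

```python
import math
import random

def corr(a, b):
    """Pearson correlation of two equal-length samples."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((x - mb) ** 2 for x in b))
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (sa * sb)

def partial_corr(a, b, c):
    """Correlation of a and b after partialling out c (Gaussian case)."""
    rab, rac, rbc = corr(a, b), corr(a, c), corr(b, c)
    return (rab - rac * rbc) / math.sqrt((1 - rac ** 2) * (1 - rbc ** 2))

def is_independent(r, n, cond_size, alpha=0.05):
    """Fisher z-test: True means we fail to reject independence."""
    z = 0.5 * math.log((1 + r) / (1 - r))
    stat = abs(z) * math.sqrt(n - cond_size - 3)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(stat / math.sqrt(2))))
    return p_value > alpha

# Toy chain X -> Y -> Z: X and Z are dependent marginally,
# but become independent once we condition on Y.
random.seed(0)
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
y = [xi + random.gauss(0, 0.5) for xi in x]
z = [yi + random.gauss(0, 0.5) for yi in y]

marginal = is_independent(corr(x, z), n, cond_size=0)            # dependent: keep edge
given_y = is_independent(partial_corr(x, z, y), n, cond_size=1)  # conditioning shrinks the signal
print(marginal, given_y)
```

PC-style algorithms run exactly this kind of test over growing conditioning sets, deleting the X–Z edge once some set (here, {Y}) renders the pair independent.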
By the end, you’ll have a clear picture of a three-layer pipeline that combines statistical discovery, expert constraints, and LLM review into a coherent approach to causal graph learning.
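The "expert constraints" layer of that pipeline can be sketched in a few lines: statistically discovered edges are pruned against a forbidden list and augmented with a required list. The variable names and the set-based representation here are hypothetical, chosen purely for illustration.

```python
# Hypothetical output of the statistical layer: directed candidate edges.
candidate_edges = {("ad_spend", "visits"), ("visits", "sales"), ("sales", "ad_spend")}

# Hard domain knowledge: sales cannot cause past ad spend;
# ad spend is known to drive visits.
forbidden = {("sales", "ad_spend")}
required = {("ad_spend", "visits")}

def apply_constraints(edges, forbidden, required):
    """Drop forbidden edges, then add back any required ones."""
    return (edges - forbidden) | required

pruned = apply_constraints(candidate_edges, forbidden, required)
print(sorted(pruned))
```

Real libraries expose richer constraint objects (tiers, required/forbidden adjacencies), but the effect is the same: edges that are statistically plausible yet practically nonsensical never reach the final graph.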
If you’d rather read than listen, the full article (with diagrams, code examples, and implementation details) is available on Substack:
👉 Enjoyed this episode? Subscribe to The AI Practitioner to get future articles and podcasts delivered straight to your inbox.
By Lina Faik