Anthropic just gave us something wild: a tool that lets you see inside an AI's brain. You can actually trace how a model makes decisions, step by step. It's called circuit tracing, and it might be the beginning of editable reasoning in LLMs.
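To get a feel for the idea, here is a toy sketch of attribution-style tracing. This is not Anthropic's circuit tracing code, and all weights and names below are made up; it only illustrates the core intuition that in a small linear model you can read off exactly how much each internal feature contributed to the output, and rank those contributions as a crude "trace" of the decision.

```python
# Hypothetical toy model: 3 inputs -> 2 hidden features -> 1 output.
# All numbers are invented for illustration.
W1 = [[0.8, -0.2], [0.1, 0.9], [-0.5, 0.4]]   # input -> hidden weights
W2 = [1.2, -0.7]                               # hidden -> output weights

x = [1.0, 0.5, -0.5]                           # one example input

# Hidden activations (kept linear so attributions are exact).
h = [sum(x[i] * W1[i][j] for i in range(3)) for j in range(2)]
y = sum(h[j] * W2[j] for j in range(2))        # model output

# In a linear model, each hidden feature's contribution to the output
# is exactly activation * outgoing weight.
contrib = {f"h{j}": h[j] * W2[j] for j in range(2)}

# Sanity check: the contributions sum to the output.
assert abs(sum(contrib.values()) - y) < 1e-9

# Rank features by influence: a crude "trace" of what drove the decision.
trace = sorted(contrib.items(), key=lambda kv: -abs(kv[1]))
print(trace)
```

Real circuit tracing works on nonlinear transformers, where contributions are no longer exact and have to be approximated with attribution graphs, but the question being asked is the same: which internal features pushed the output, and by how much.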
We’ll talk about:
Keywords:
Anthropic, circuit tracing, attribution graphs, DeepSeek R1-0528, Claude 3.7, Google AI fail, Gemini, GAIA AI, AI interpretability, AI reasoning, foundation models, AI transparency, interactive video AI, Grammarly funding, AI browser, OpenAI vs creators, AI Napster moment
Links:
Our Socials:
By AIFire.co · 2.4 (55 ratings)