
Anthropic researchers investigated the internal workings of their Claude 3.5 Haiku large language model using a technique called circuit tracing. This method allows them to identify and map connections between "features," which they hypothesise are the basic units of computation within the model, akin to cells in biological systems. Their study explored a range of capabilities, such as multi-step reasoning, poetry planning, multilingual processing, and even detecting hidden goals. By analysing these internal mechanisms, the researchers gained insights into how the model performs various tasks, including instances of faithful and unfaithful chain-of-thought reasoning and its ability to refuse harmful requests. The findings highlight the complex and often abstract nature of computation within the model, revealing parallel processing, generalisable abstractions, and even forms of internal "planning." This work aims to advance the understanding of AI interpretability by providing detailed case studies and a methodology for examining the "biology" of a powerful language model.
By Benjamin Alloul
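
To make the idea concrete, here is a minimal, hypothetical sketch (in Python/NumPy) of what an "attribution graph" built by circuit tracing might look like: features are treated as nodes, and an edge score approximates how much an upstream feature's activation contributes to a downstream feature. The variable names, shapes, and the simple activation-times-weight attribution rule are illustrative assumptions for this sketch, not Anthropic's actual method.

```python
# Toy "attribution graph" sketch: score edges between upstream and downstream
# features, then keep only the strongest edges to form a sparse circuit.
# All names and the linear attribution rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_upstream, n_downstream = 6, 4
upstream_acts = rng.random(n_upstream)            # feature activations at layer L
W = rng.normal(size=(n_downstream, n_upstream))   # effective weights from layer L to L+1

# First-order attribution: contribution of upstream feature j to downstream
# feature i is approximated as weight * activation.
attributions = W * upstream_acts                  # shape (n_downstream, n_upstream)

# Keep only the strongest 20% of edges to form the sparse graph.
threshold = np.percentile(np.abs(attributions), 80)
edges = [
    (j, i, attributions[i, j])
    for i in range(n_downstream)
    for j in range(n_upstream)
    if abs(attributions[i, j]) >= threshold
]

for src, dst, weight in sorted(edges, key=lambda e: -abs(e[2])):
    print(f"upstream feature {src} -> downstream feature {dst}: attribution {weight:+.3f}")
```

Tracing which high-attribution edges chain together across layers is the kind of analysis that, at scale, yields the multi-step reasoning and "planning" circuits described in the episode.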