

Wide release date: June 25, 2025
Episode Summary: Dr. Marius Pachitariu discusses how the brain computes information across scales, from single neurons to complex networks, using mice to study visual learning. He explains the differences between supervised and unsupervised learning, the brain’s high-dimensional processing, and how it compares to artificial neural networks like large language models. The conversation also covers experimental techniques, such as Neuropixels probes and calcium imaging, and the role of reward prediction errors in learning.
About the guest: Marius Pachitariu, PhD, is a group leader at the Janelia Research Campus, where his lab studies neuroscience with a blend of experimental and computational approaches.
Discussion Points:
* The brain operates at multiple scales, with single neurons acting as computational units and networks creating complex, high-dimensional computations.
* Pachitariu’s lab uses advanced tools like calcium imaging to record from tens of thousands of neurons simultaneously in mice.
* Unsupervised learning allows mice to form visual memories of environments without rewards, speeding up task learning later.
* Brain activity during sleep or anesthesia is highly correlated, unlike the high-dimensional, less predictable patterns during wakefulness.
* The brain expands sensory input dimensionality (e.g., from retina to visual cortex) to simplify complex computations, a principle also seen in artificial neural networks.
* Reward prediction errors, driven by dopamine, signal when expectations are violated, aiding learning by updating internal models.
* Large language models rely on self-supervised learning (predicting the next word), but lack the forward-modeling reasoning humans excel at.
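The reward prediction error idea from the discussion points can be sketched as a minimal temporal-difference-style value update. This is an illustrative toy, not the guest's model; the learning rate and reward values are assumptions chosen for the example.

```python
# Minimal sketch of learning from reward prediction errors (RPE).
# The dopamine-like signal is the gap between the reward received
# and the reward expected; the expectation is nudged toward the
# truth by a small learning rate. All numbers here are illustrative.

def rpe_update(expected_value: float, reward: float, learning_rate: float = 0.1) -> float:
    """Update an expectation using the reward prediction error."""
    prediction_error = reward - expected_value  # positive when outcome beats expectation
    return expected_value + learning_rate * prediction_error

# A cue repeatedly followed by a reward of 1.0: the expectation
# climbs toward 1.0 and the prediction error shrinks toward zero,
# which is why a fully predicted reward stops driving learning.
value = 0.0
for _ in range(50):
    value = rpe_update(value, reward=1.0)
```

Once the expectation matches the reward, the error term (and hence the update) vanishes, mirroring the point in the episode that dopamine signals violated expectations rather than reward per se.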
Related episode:
* M&M 44: Consciousness, Perception, Hallucinations, Selfhood, Neuroscience, Psychedelics & "Being You" | Anil Seth
* Not medical advice.
* Full audio version: [Apple] [Spotify] [Elsewhere]
* Full video version: [YouTube]
* Support M&M if you find value in this content.
* Episode transcript below.
Episode Chapters:
00:00:00 Intro
00:05:25 Neural Computations & Scales
00:13:30 Single Neuron Computations
00:21:35 Network Dynamics & Complexity
00:30:33 Recording Techniques & Tools
00:39:32 Brain Efficiency & Metabolism
00:47:30 Population Activity & Correlations
00:56:10 High-Dimensional Brain Activity
01:03:46 Supervised & Unsupervised Learning
01:12:37 Experimental Paradigm & Mouse Behavior
01:22:29 Visual Memory & Neural Changes
01:34:58 LLMs & Brain Reasoning Comparison
01:45:01 Closing Thoughts & Future Directions
Full AI-generated transcript below. Beware of typos & mistranslations!
By Nick Jikomes