By Gaute Einevoll
The podcast currently has 18 episodes available.
Most of what we have learned about the functioning of the living brain has come from extracellular electrical recordings, such as measurements of spikes, local field potentials (LFP), ECoG, and EEG signals.
And most analysis of these recordings has been statistical, looking for correlations between the recorded signals and what the animal/human is doing or being exposed to.
However, starting with the neuron rather than the data, these electrical brain signals can also be computed from biophysics-based forward models, and this is the topic of this podcast.
The most prominent visual characteristic of neurons is their dendrites.
Even more than 100 years after their first observation by Cajal, their function is not fully understood. Biophysical modeling based on cable theory is a key research tool for exploring putative functions, and today’s guest is one of the leading researchers in this field.
We talk about passive and active dendrites, the kinds of filtering of synaptic inputs they support, the key role of synapse placement, and how the inclusion of dendrites may facilitate AI.
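The filtering that passive dendrites impose on synaptic inputs can be illustrated with a minimal compartmental model in the spirit of cable theory. The sketch below is a hypothetical toy with made-up parameters, not a model from the episode: a brief current injected at one end of a passive cable arrives at the far end attenuated, delayed, and smoothed.

```python
# Toy passive cable discretized into compartments (illustrative parameters only).
# Each compartment has a leaky membrane (time constant tau_m) and is coupled
# to its neighbors by an axial "diffusion" term; ends are sealed.
def simulate_passive_cable(n_comp=20, dt=0.01, t_stop=100.0,
                           tau_m=10.0, coupling=5.0, i_amp=1.0):
    v = [0.0] * n_comp                    # membrane potentials (rest = 0)
    trace_near, trace_far = [], []
    for step in range(int(t_stop / dt)):
        i_inj = i_amp if step * dt < 1.0 else 0.0   # brief current pulse at comp 0
        v_new = v[:]
        for k in range(n_comp):
            left = v[k - 1] if k > 0 else v[k]       # sealed-end boundary
            right = v[k + 1] if k < n_comp - 1 else v[k]
            axial = coupling * (left + right - 2.0 * v[k])
            inj = i_inj if k == 0 else 0.0
            v_new[k] = v[k] + dt * (-v[k] / tau_m + axial + inj)
        v = v_new
        trace_near.append(v[0])           # injection site
        trace_far.append(v[-1])           # distal end of the cable
    return trace_near, trace_far

near, far = simulate_passive_cable()
```

Comparing the two traces shows the low-pass filtering: the distal response is smaller and peaks later than the response at the injection site.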
The greatest mystery of all is why a group of atoms, like the ones constituting me, can feel anything. The mind-brain problem has puzzled philosophers for millennia.
Thanks to pioneers like Christof Koch, consciousness studies have recently become a legitimate field of scientific inquiry.
In this vintage episode, recorded in February 2021, we discuss many aspects of the phenomenon, including an intriguing candidate theory: Integrated Information Theory.
Computational neuroscientists use many software tools, and NEURON has become the leading tool for biophysical modeling of neurons and neural networks.
Today’s guest has been the leading developer of NEURON since its infancy almost 50 years ago.
We talk about how the tool got started and its development up to today’s modern version of the software, including CoreNEURON, which is optimized for parallel execution of large-scale network models on multicore supercomputers.
The idea that memories are stored in molecules was popular in the middle of the 20th century. However, since the discovery of long-term potentiation (LTP) in the 1970s, the dominant view has been that our memories are stored in synapses, that is, in the connections between neurons.
Today, there are signs that the interest in molecular memory is returning, and the guest has presented a theory suggesting that molecular and synaptic memory might serve complementary needs for animals.
Is quantum physics important in determining how living systems, including brains, work?
Today’s guest is a professor of molecular genetics at the University of Surrey in England and explores this question in the book “Life on the Edge: The Coming of Age of Quantum Biology”.
In this “vintage” episode, recorded in late 2019, we talk about how quantum physics is, or may be, key in photosynthesis, smell, navigation, evolution, and even thinking. We also touch on the development of new antibiotics, another area of McFadden’s expertise.
Most computational neuroscientists investigate electrical dynamics in neurons or neural networks, but there are also computations going on inside neurons.
Here the key dynamical variables are concentrations of numerous different molecules, and the signaling is typically done in cascades of chemical reactions, called signaling pathways.
Today’s guest is an expert in this kind of modeling and is particularly interested in the signaling role of calcium.
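As a toy illustration of signaling via cascades of chemical reactions, the sketch below integrates a hypothetical two-step pathway with forward Euler: a brief calcium influx activates a kinase, which in turn activates a downstream target. All species and rate constants are made-up assumptions, not a model discussed in the episode. A characteristic feature of cascades emerges: each stage peaks later and more slowly than the one before, so the cascade acts as a temporal filter.

```python
# Hypothetical two-step signaling cascade (illustrative rate constants only).
def simulate_cascade(dt=0.001, t_stop=10.0):
    ca = kin = tgt = 0.0
    ca_tr, kin_tr, tgt_tr = [], [], []
    for step in range(int(t_stop / dt)):
        t = step * dt
        influx = 5.0 if t < 0.5 else 0.0        # brief calcium influx
        ca += dt * (influx - 2.0 * ca)          # pumps clear calcium
        kin += dt * (1.0 * ca - 0.5 * kin)      # calcium activates a kinase
        tgt += dt * (0.8 * kin - 0.2 * tgt)     # kinase activates a target
        ca_tr.append(ca)
        kin_tr.append(kin)
        tgt_tr.append(tgt)
    return ca_tr, kin_tr, tgt_tr

ca_tr, kin_tr, tgt_tr = simulate_cascade()
```

Inspecting the peak times of the three traces shows the successive delays introduced by each reaction step.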
Today’s AI is largely based on supervised learning of neural networks using the backpropagation-of-error synaptic learning rule. This learning rule relies on differentiation of continuous activation functions and is thus not directly applicable to spiking neurons.
Today’s guest has developed the algorithm SuperSpike to address the problem. He has also recently developed a biologically more plausible learning rule based on self-supervised learning. We talk about both.
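The core trick behind surrogate-gradient approaches such as SuperSpike can be sketched in a few lines: use the non-differentiable step nonlinearity on the forward pass, but substitute a smooth surrogate for its derivative on the backward pass. The toy single-neuron example below is a hypothetical illustration with made-up parameters, not the actual SuperSpike algorithm, which operates on spike trains over time.

```python
def spike(u):
    """Non-differentiable spiking nonlinearity (Heaviside step)."""
    return 1.0 if u > 0.0 else 0.0

def surrogate_grad(u, beta=1.0):
    """Smooth stand-in for the step's derivative, used only on the backward pass.
    The fast-sigmoid shape here is one common surrogate choice."""
    return 1.0 / (1.0 + beta * abs(u)) ** 2

# Toy example: nudge a weight until the neuron's membrane potential
# crosses threshold for a given input (parameters are illustrative).
w, x, target, lr = 0.1, 1.0, 1.0, 0.5
for _ in range(200):
    u = w * x - 0.5                  # membrane potential relative to threshold
    err = spike(u) - target
    # backward pass: replace d(spike)/du with the surrogate derivative
    w -= lr * err * surrogate_grad(u) * x
```

Without the surrogate, the gradient of the step function is zero almost everywhere and the weight would never move; with it, the update drives the neuron to fire as desired.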
Over the last ten years or so, the MindScope project at the Allen Institute in Seattle has pursued an industry-lab-like approach to studying the mouse visual cortex in unprecedented detail using electrophysiology, optophysiology, optical imaging, and electron microscopy.
Together with collaborators at Allen, today’s guest has worked to integrate these data into large-scale neural network models, and in the podcast he talks about their ambitious endeavor.
Today’s guest is a pioneer both in the fields of computational neuroscience and artificial intelligence (AI) and has had a front seat during their development.
His many contributions include, for example, the invention of the Boltzmann machine with Ackley and Hinton in the mid-1980s.
In this “vintage” episode recorded in late 2019 he describes the joint births of these adjacent scientific fields and outlines how they came about.