
Our guest in this episode is Noel Hurley. Noel is a highly experienced technology strategist with a long career at the cutting edge of computing. He spent two decade-long stints at Arm, the semiconductor company whose processor designs power hundreds of billions of devices worldwide.
Today, he’s a co-founder of Literal Labs, where he’s developing Tsetlin Machines. Named after Michael Tsetlin, a Soviet mathematician, these are machine learning models that are energy-efficient, flexible, and surprisingly effective at solving complex problems - without the opacity or computational overhead of large neural networks.
AI has long had two main camps, or tribes. One camp works with neural networks, including Large Language Models. Neural networks are brilliant at pattern matching, and can be compared to human instinct, or fast thinking, to use Daniel Kahneman's terminology. Neural nets have been dominant since the first Big Bang in AI in 2012, when Geoff Hinton and others demonstrated the foundations for deep learning.
For decades before the 2012 Big Bang, the predominant form of AI was symbolic AI, also known as Good Old Fashioned AI. This can be compared to logical reasoning, or slow thinking in Kahneman's terminology.
Tsetlin Machines have characteristics of both neural networks and symbolic AI. They are rule-based learning systems built from simple automata, not from neurons or weights. But their learning mechanism is statistical and adaptive, more like machine learning than traditional symbolic AI.
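To give a feel for the "simple automata" mentioned above, here is a minimal sketch of a single two-action Tsetlin automaton, the basic building block of a Tsetlin Machine. The class and parameter names are illustrative only, not taken from Literal Labs' implementation: each automaton just counts rewards and penalties, nudging its state toward or away from a decision boundary.

```python
import random

class TsetlinAutomaton:
    """A two-action Tsetlin automaton with 2*n states.

    States 1..n select action 0 (e.g. 'exclude a literal from a rule');
    states n+1..2n select action 1 (e.g. 'include it'). Rewards push the
    state deeper into the current action's half, making the choice more
    confident; penalties push it back toward, and eventually across, the
    middle, flipping the decision.
    """

    def __init__(self, n=100):
        self.n = n
        # Start just either side of the boundary: undecided.
        self.state = random.choice([n, n + 1])

    def action(self):
        return 1 if self.state > self.n else 0

    def reward(self):
        # Reinforce the current action by moving away from the boundary.
        if self.action() == 1:
            self.state = min(self.state + 1, 2 * self.n)
        else:
            self.state = max(self.state - 1, 1)

    def penalize(self):
        # Weaken the current action by moving toward the boundary.
        if self.action() == 1:
            self.state -= 1
        else:
            self.state += 1
```

A full Tsetlin Machine coordinates thousands of such automata, one per candidate literal in each rule, with statistical feedback deciding when to reward or penalize - which is what gives the system its blend of symbolic rules and adaptive learning.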
Selected follow-ups:
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
By London Futurists