
The more the science of intelligence (both human and artificial) advances, the more it holds the potential for great benefits and dangers to society.
Max Bennett is the cofounder and CEO of Alby, a start-up that helps companies integrate large language models into their websites to create guided shopping and search experiences. Previously, Bennett was the cofounder and chief product officer of Bluecore, one of the fastest growing companies in the U.S., providing AI technologies to some of the largest companies in the world. Bluecore has been featured in the annual Inc. 500 fastest growing companies, as well as Glassdoor’s 50 best places to work in the U.S. Bluecore was recently valued at over $1 billion. Bennett holds several patents for AI technologies and has published numerous scientific papers in peer-reviewed journals on the topics of evolutionary neuroscience and the neocortex. He has been featured on the Forbes 30 Under 30 list as well as the Built In NYC’s 30 Tech Leaders Under 30. He is the author of A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.
"So, modern neuroscientists are questioning if there really is one consistent limbic system. But usually when we're looking at the limbic system, we're thinking about things like emotion, volition, and goals. And those types of things, I would argue, reinforcement learning algorithms already have, at least on a primitive level, because the way that we get them to achieve goals like playing a game of Go and winning is we give them a reward signal or a reward function. And then we let them self-play and teach themselves based on maximizing that reward. But that doesn't mean that they're self-aware, doesn't mean that they're experiencing anything at all. There's a fascinating set of questions in the AI community around what's called the reward hypothesis, which is how much of intelligent behavior can be understood through the lens of just trying to optimize a reward signal. We are more than just trying to optimize reward signals. We do things to try and reinforce our own identities. We do things to try and understand ourselves. These are attributes that are hard to explain from a simple reward signal, but they do make sense in other conceptions of intelligence, like Karl Friston's active inference, where we build a model of ourselves and try and reinforce that model."
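Bennett's point about giving an agent a reward signal and letting it teach itself can be illustrated with a minimal sketch. The two-armed bandit below is a toy stand-in (not from the episode, and far simpler than the self-play systems he describes): the agent knows nothing about the environment except the reward it receives, yet reward maximization alone is enough for goal-directed behavior to emerge.

```python
import random

random.seed(0)

# Hypothetical environment: two arms with hidden payout probabilities.
TRUE_REWARDS = {"left": 0.2, "right": 0.8}
estimates = {"left": 0.0, "right": 0.0}  # agent's learned value estimates
counts = {"left": 0, "right": 0}

def pull(arm):
    """The reward signal: 1 with the arm's hidden probability, else 0."""
    return 1.0 if random.random() < TRUE_REWARDS[arm] else 0.0

for step in range(2000):
    # Epsilon-greedy: mostly exploit the current best estimate,
    # occasionally explore at random.
    if random.random() < 0.1:
        arm = random.choice(["left", "right"])
    else:
        arm = max(estimates, key=estimates.get)
    reward = pull(arm)
    counts[arm] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

best = max(estimates, key=estimates.get)
print(best)
```

The agent was never told which arm is better; it converges on the higher-reward arm purely by optimizing the signal. The reward-hypothesis question Bennett raises is how far this mechanism scales toward behaviors like identity maintenance and self-understanding.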
www.abriefhistoryofintelligence.com/
www.alby.com
www.bluecore.com
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast