
The more the science of intelligence (both human and artificial) advances, the more it holds the potential for great benefits and dangers to society.
Max Bennett is the cofounder and CEO of Alby, a start-up that helps companies integrate large language models into their websites to create guided shopping and search experiences. Previously, Bennett was the cofounder and chief product officer of Bluecore, one of the fastest-growing companies in the U.S., providing AI technologies to some of the largest companies in the world. Bluecore has been featured in the annual Inc. 500 list of fastest-growing companies, as well as Glassdoor’s 50 best places to work in the U.S. Bluecore was recently valued at over $1 billion. Bennett holds several patents for AI technologies and has published scientific papers in peer-reviewed journals on evolutionary neuroscience and the neocortex. He has been featured on the Forbes 30 Under 30 list as well as Built In NYC’s 30 Tech Leaders Under 30. He is the author of A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.
"So, modern neuroscientists are questioning if there really is one consistent limbic system. But usually when we're looking at the limbic system, we're thinking about things like emotion, volition, and goals. And those types of things, I would argue reinforcement learning algorithms, at least on a primitive level, we already have because the way that we get them to achieve goals like play a game of go and win is we give them a reward signal or a reward function. And then we let them self-play and teach themselves based on maximizing that reward. But that doesn't mean that they're self-aware, doesn't mean that they're experiencing anything at all. There's a fascinating set of questions in the AI community around what's called the reward hypothesis, which is how much of intelligent behavior can be understood through the lens of just trying to optimize a reward signal. We are more than just trying to optimize reward signals. We do things to try and reinforce our own identities. We do things to try and understand ourselves. These are attributes that are hard to explain from a simple reward signal, but do make sense. And other conceptions of intelligence like Karl Friston's active inference where we build a model of ourselves and try and reinforce that model."
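Bennett's point about reward functions — that we get agents to achieve goals by giving them a reward signal and letting them teach themselves to maximize it — can be illustrated with a toy sketch. This is not from the episode; it is a minimal, illustrative example of tabular Q-learning on a made-up five-state corridor, where the only feedback the agent gets is a reward at the right end:

```python
import random

# Illustrative sketch of reward-driven learning: tabular Q-learning on a
# tiny corridor (states 0..4). The agent receives only a reward signal
# (+1 on reaching state 4) and, by maximizing it, learns to walk right.
# All parameters and the environment itself are invented for illustration.

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)   # walls clip movement
            r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
            # nudge the estimate toward reward plus discounted future value
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The greedy policy learned from the reward signal alone: move right everywhere.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

Nothing here is self-aware, which is Bennett's point: the agent's entire "goal-directedness" reduces to updating numbers toward a scalar reward.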
www.abriefhistoryofintelligence.com/
www.alby.com
www.bluecore.com
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
By The Creative Process Original Series: Artificial Intelligence, Technology, Innovation, Engineering, Robotics & Internet of Things
