
This week on IPWatchdog Unleashed we explore whether Artificial Intelligence (AI) technology has progressed to the point where it has already achieved consciousness. In a nutshell, our panel of technologists do not believe AI is very close to achieving consciousness, but they do believe it is possible for AI to eventually become conscious, and even self-reflective, which would pose an existential threat to humans.
Our conversation this week is from a panel presentation titled “Artificial Intelligence Today: A Discussion of the Technical Landscape of AI.” I moderated this conversation between Jason Alan Snyder, Chief AI Officer for Momentum Worldwide; Malek Ben Salem, an AI expert, technologist and consultant; Dustin Raney, Head of Industry Strategy for Acxiom; and Dina Blinksteyn, a partner and co-chair of the AI Practice Group at Haynes Boone.
We begin by asking whether AI has become sentient, and if not, when we can expect it will, a question I’ve asked Jason Alan Snyder each of the previous two years we have hosted an AI-specific conference at IPWatchdog Studios. Two years ago, he predicted AI would become sentient within 15 years. Last year he predicted AI would become sentient within 14 years. Predictably perhaps, he agreed with his previous predictions, saying this year that “13 years is probably a good guess.”
As the conversation unfolded, we spoke about whether hallucinations continue to be a problem for AI, whether the Turing test remains relevant with respect to defining AI, and fundamental aspects of what it means to be human. We wrap up at the point where Snyder and Ben Salem discuss how AI could become an existential threat to humanity.
Visit us online at IPWatchdog.com.
You can also visit our channels at YouTube, LinkedIn, X, Instagram and Facebook.
Visit us online at IPWatchdog.com.
You can also visit our channels at YouTube, LinkedIn, X, Instagram and Facebook.