
Artificial general intelligence (AGI) could be humanity’s greatest invention ... or our biggest risk.
In this episode of TechFirst, I talk with Dr. Ben Goertzel, CEO and founder of SingularityNET, about the future of AGI, the possibility of superintelligence, and what happens when machines think beyond human programming.
We cover:
• Is AGI inevitable? How soon will it arrive?
• Will AGI kill us … or save us?
• Why decentralization and blockchain could make AGI safer
• How large language models (LLMs) fit into the path toward AGI
• The risks of an AGI arms race between the U.S. and China
• Why Ben Goertzel created MeTTa, a new AGI programming language
📌 Topics include AI safety, decentralized AI, blockchain for AI, LLMs, reasoning engines, superintelligence timelines, and the role of governments and corporations in shaping the future of AI.
⏱️ Chapters
00:00 – Intro: Will AGI kill us or save us?
01:02 – Ben Goertzel in Istanbul & the Beneficial AGI Conference
02:47 – Is AGI inevitable?
05:08 – Defining AGI: generalization beyond programming
07:15 – Emotions, agency, and artificial minds
08:47 – The AGI arms race: US vs. China vs. decentralization
13:09 – Risks of narrow or bounded AGI
15:27 – Decentralization and open-source as safeguards
18:21 – Can LLMs become AGI?
20:18 – Using LLMs as reasoning guides
21:55 – Hybrid models: LLMs plus reasoning engines
23:22 – Hallucination: humans vs. machines
25:26 – How LLMs accelerate AI research
26:55 – How close are we to AGI?
28:18 – Why Goertzel built a new AGI language (MeTTa)
29:43 – MeTTa: from AI coding to smart contracts
30:06 – Closing thoughts
By John Koetsier