
In this episode of The Profound Podcast, I sit down with Dr. Jabe Bloom, a researcher and expert in systems thinking, AI, and digital transformation. We explore Erik Larson's book The Myth of Artificial Intelligence, tackling the contentious debate around artificial general intelligence (AGI). Dr. Bloom offers insights from his dissertation and divides the ongoing discourse on AI into two camps: dogmatists and pragmatists. Dogmatists believe AGI is inevitable, while pragmatists focus on the practical impacts of current AI technology, such as large language models (LLMs), and how these will reshape businesses, education, and society.
Throughout the episode, Dr. Bloom explains his framework for thinking about AI, touching on proactionary versus precautionary approaches to its development and regulation. He also draws connections between these ideas and W. Edwards Deming’s principles, especially around abductive reasoning—a concept that links back to Dr. Bloom’s past discussions about AI’s potential in problem-solving.
The conversation takes a critical view of AGI's feasibility, with Dr. Bloom emphasizing the current challenges AI faces in replicating abductive reasoning, which involves making intelligent guesses—a capability he argues machines have yet to achieve. We also dive into examples from fields like DevOps, healthcare, and city planning, discussing where AI has shown great promise and where it still falls short.
Key takeaways from the episode include the importance of addressing present AI technologies and their immediate impacts on work and society, as well as the ongoing need for human oversight and critique when using AI systems.