


OpenAI's ChatGPT, an advanced chatbot, has taken the world by storm, amassing over 100 million monthly active users and exhibiting unprecedented capabilities. From crafting essays and fiction to designing websites and writing code, you’d be forgiven for thinking there’s little it can’t do.
Now it has had an upgrade. GPT-4 boasts even more impressive abilities: it can accept images as input and delivers smoother, more natural writing. But it still hallucinates, throwing up false answers, and remains unable to reference any world events that happened after September 2021.
Seeking to get under the hood of the large language model that powers GPT-4, host Alok Jha speaks with Maria Liakata, a professor in natural language processing at Queen Mary University of London. We put the technology through its paces with The Economist’s tech guru Ludwig Siegele, and even run it through something like a Turing test to get an idea of whether it could pass for human-level intelligence.
Artificial general intelligence is the ultimate goal of AI research, so how significant will GPT-4 and similar technologies be in the grand scheme of machine intelligence? Not very, suggests Gary Marcus, an expert in both AI and human intelligence, though they will affect all of our lives in both good and bad ways.
For full access to The Economist’s print, digital and audio editions, subscribe at economist.com/podcastoffer and sign up for our weekly science newsletter at economist.com/simplyscience.
Hosted on Acast. See acast.com/privacy for more information.
By The Economist · 4.8 (582 ratings)