OpenAI's ChatGPT, an advanced chatbot, has taken the world by storm, amassing over 100 million monthly active users and exhibiting unprecedented capabilities. From crafting essays and fiction to designing websites and writing code, you’d be forgiven for thinking there’s little it can’t do.
Now it’s had an upgrade. GPT-4 has even more impressive abilities: it can accept photos as input and delivers smoother, more natural writing to the user. But it also hallucinates, throws up false answers, and remains unable to reference any world events that happened after September 2021.
Seeking to get under the hood of the Large Language Model that powers GPT-4, host Alok Jha speaks with Maria Liakata, a professor of Natural Language Processing at Queen Mary University of London. We put the technology through its paces with The Economist’s tech guru Ludwig Siegele, and even run it through something like a Turing Test to give an idea of whether it could pass for human-level intelligence.
Artificial General Intelligence is the ultimate goal of AI research, so how significant will GPT-4 and similar technologies be in the grand scheme of machine intelligence? Not very, suggests Gary Marcus, an expert in both AI and human intelligence, though they will affect all of our lives in both good and bad ways.
For full access to The Economist’s print, digital and audio editions, subscribe at economist.com/podcastoffer and sign up for our weekly science newsletter at economist.com/simplyscience.
Hosted on Acast. See acast.com/privacy for more information.
Rating: 4.8 (568 ratings)