The goal of achieving "artificial general intelligence," or AGI, is shared by many in the AI field. OpenAI's charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work," and last summer the company announced its plan to achieve AGI within five years. While experts at companies like Meta and Anthropic quibble with the term, many AI researchers treat AGI as either an explicit or implicit goal. Google DeepMind went so far as to set out "Levels of AGI," identifying key principles and definitions of the term.
Today’s guests are among the authors of a new paper that argues the field should stop treating AGI as the north-star goal of AI research. They include: