


The goal of achieving "artificial general intelligence," or AGI, is shared by many in the AI field. OpenAI's charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work," and last summer, the company announced its plan to achieve AGI within five years. While experts at companies like Meta and Anthropic quibble with the term, many AI researchers treat AGI as either an explicit or implicit goal. Google DeepMind went so far as to set out "Levels of AGI," identifying key principles and definitions of the term.
Today's guests are among the authors of a new paper that argues the field should stop treating AGI as the north-star goal of AI research. They include:
By Tech Policy Press