To talk about artificial general intelligence (AGI), you need a coherent and valid construct of intelligence (not just a definition), and ideally one with a measure.
But hey! I don't really care about that; I'm just sick of hearing about it in every third post on my feed.
Here is what leaders in the field think: "The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.
[...] Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way." Former Baidu Vice President and Chief Scientist Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."" (https://archive.is/HSEqf)
[...] "The lack of a clear, universally accepted definition is not unique to 'AGI.' For instance, 'AI' also has many different meanings within the AI research community, with no clear consensus on the definition. 'Intelligence' is also a fairly vague concept; Legg and Hutter wrote a paper summarizing and organizing over 70 different published definitions of 'intelligence', most oriented toward general intelligence, emanating from researchers in a variety of disciplines (Legg and Hutter, 2007)." (per: https://archive.is/LUOFo)
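Ironically, Legg and Hutter themselves went on to propose exactly such a formal measure: "universal intelligence," an agent's expected performance summed over all computable environments, weighted toward the simpler ones. Here's a rough sketch of the formula as I recall it from the 2007 paper (π is the agent, E the set of computable environments, K(μ) the Kolmogorov complexity of environment μ, and V_μ^π the agent's expected total reward in μ):

```latex
% Legg & Hutter (2007), "universal intelligence" of an agent \pi:
% expected reward over all computable environments, weighted by simplicity.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Note that K is incomputable, so this gives you a definition, not a practical test. Which is rather the point: even the people who took "a measure of intelligence" seriously couldn't hand you one you can actually administer.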
If "AGI" = "AI is gonna do a bunch of human stuff", well, it's already doing that. "We didn't think AI could beat Go pla--" yes we did; many of us did. There will be one for every esport soon enough. This is irrelevant.
There are pretty clearly widespread differences among leaders in the field over how to construe "AGI", so it's foolish to say something like "AGI is near!!!" when you're not even clear on what you mean by "general intelligence", never mind "artificial" general intelligence.
Reality check:
1. People will dispute what physical properties constitute "consciousness" forever.
2. Even if they don't, the harder part is finding non-arbitrary criteria for consciousness, and especially for *kinds* of consciousness.
3. AI systems are created by a priori formalizations of human thought. Human beings are created by adaptive evolution, which does no formalization whatsoever. AI **cannot replicate this by definition** unless it also replicates human evolutionary adaptation. Otherwise the two are fundamentally and qualitatively different phenomena, even if the AI "passes".
4. Many (if not most) of you either don't understand or barely understand what is on a WAIS (the Wechsler Adult Intelligence Scale, the standard clinical measure of general intelligence; see the sketch after this list), never mind the phenomenology of all human experience and what would be required for AI to replicate it. You're being silly.
5. Speaking of being silly, there's a limited amount our brains can do; our brains are the bottleneck for what we can make AI do. Perhaps something like https://archive.is/Opz5F or https://archive.is/vvb5y is the next step.
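Since I brought up the WAIS: here's a minimal sketch of what's actually on it, using the WAIS-IV's core structure (ten core subtests grouped into four index scores, which together yield the Full Scale IQ). This is illustrative only; real scoring runs raw scores through proprietary age-normed tables that I'm obviously not reproducing here.

```python
# Illustrative sketch of the WAIS-IV's core structure.
# Real scoring converts raw subtest scores to scaled scores via
# age-normed lookup tables (proprietary, not reproduced here).

WAIS_IV_CORE = {
    "Verbal Comprehension (VCI)": ["Similarities", "Vocabulary", "Information"],
    "Perceptual Reasoning (PRI)": ["Block Design", "Matrix Reasoning", "Visual Puzzles"],
    "Working Memory (WMI)": ["Digit Span", "Arithmetic"],
    "Processing Speed (PSI)": ["Symbol Search", "Coding"],
}

def describe() -> None:
    """Print the four indices and their core subtests; the Full Scale IQ
    is derived from all ten core subtests combined."""
    for index, subtests in WAIS_IV_CORE.items():
        print(f"{index}: {', '.join(subtests)}")

if __name__ == "__main__":
    describe()
```

If you can't say offhand what half of those subtests measure, maybe hold off on proclaiming that a chatbot has "general intelligence."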
Whichever way that goes, we will ultimately need genetic advancement to push civilization to a new paradigm; AI alone will not get us there.