By David Stephen, who looks at the latest trends from CES 2026
The Consumer Technology Association held the Consumer Electronics Show 2026 in Las Vegas, from January 6–9, with AI as the dominant theme. But if AI is accelerating, what becomes of human intelligence?
Who, on earth, has custody of human intelligence? Simply, there are several artificial intelligence research labs, startups, companies, projects, and conferences that can be described as custodians of AI, but who exactly is working directly on how to improve human intelligence, anywhere in the world?
Latest trends from CES 2026
What has become of humanity is evident in the race towards artificial general intelligence [AGI]. How can a company raise, say, $20 billion for more AI, including data centers, while there is no consensus even on the definition of human intelligence or what it is in the brain?
Why is there no project on how to improve human intelligence for problem-solving? How is it not possible even to describe the difference between what is labeled as memory and what is called intelligence?
If humanity is doing so much for AI, with the assumption that humanity will benefit, what is humanity doing for human intelligence, with the assumption that humanity will benefit?
If everything that has improved humanity came by human intelligence, does that not show that prospects abound for human intelligence? If more were done to extract details [even conceptual] about it in the brain, it would be decisive.
While AI is immensely helpful and AGI would be even more so, does it not mean the time is ripe for the quest into solving what human intelligence is, to ensure that humanity stays competitive if AGI becomes reality?
How is human intelligence not the concern of any team, university, company, organization, or nation? How is it not clear that, because the world is still run by the law of ownership, artificial intelligence will mostly belong to artificial intelligence? If humanity decides to give up on certain [intelligence-based] complex attempts at scale, outsourcing them to artificial intelligence, would that not mean the horizon of a fall?
CES 2026
There is a recent article on TechCrunch, The most bizarre tech announced so far at CES 2026, stating that, "While CES 2026 is full of tech giants unveiling their latest innovations, the real excitement comes from discovering unexpected, quirky gadgets that make you ask, 'Who thought of this?'"
"We're here to spotlight the wildest products we've found so far at CES 2026, from an AI-powered panda that responds to your touch, to Razer's holographic anime assistant, and plenty more weirdness that makes you do a double-take:
An AI anime companion that watches you from your desk. A cuddly AI baby panda robot for older adults. A $500 ice cube maker that uses AI to reduce noise. An ultrasonic chef's knife that vibrates when slicing and dicing. A musical lollipop that plays Ice Spice in your head."
Human Intelligence Research Lab
The biggest problem with the human mind is that it adapts to certain changes almost immediately, and with continuity, it may not revert to its old state.
As AI use percolates through productivity, relationships, companionship, academics, and much more, there are relays that the human mind would miss, which may affect some untapped capabilities for problem-solving at a later time. This is a risk for human intelligence.
Intelligence can be defined as the use of memory for expected, desired or advantageous outcomes. This means that intelligence is mostly relays, while memory is mostly stations.
So, the quality of relays across stations can be said to constitute intelligence. If this quality continues to wane because some other assistant is always available, it may prove costly in cases where an advantage would otherwise have been possible, say over AI or over some obstacle.
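The relays-and-stations description is conceptual, but a toy sketch can make it concrete: treat memory "stations" as nodes in a graph and "relays" as edges with a quality between 0 and 1, and approximate intelligence as the strength of the best relay path from a problem cue to a stored solution. Everything here (the station names, the quality numbers, the multiplicative path strength) is an illustrative assumption, not the author's formalism.

```python
# Toy illustration of the relays/stations model (illustrative assumptions only):
# stations = graph nodes, relays = weighted edges with quality in (0, 1],
# and a path's strength is the product of the relay qualities along it.

def best_relay_strength(relays, start, goal):
    """Return the strongest path strength from start to goal.

    relays: dict mapping station -> {neighbor: quality}.
    Uses a simple depth-first search over simple paths.
    """
    best = 0.0
    stack = [(start, 1.0, {start})]
    while stack:
        node, strength, seen = stack.pop()
        if node == goal:
            best = max(best, strength)
            continue
        for nxt, quality in relays.get(node, {}).items():
            if nxt not in seen:
                stack.append((nxt, strength * quality, seen | {nxt}))
    return best

# Hypothetical stations: a problem cue relays through memories to a solution.
relays = {
    "problem": {"memory_a": 0.9, "memory_b": 0.6},
    "memory_a": {"solution": 0.8},
    "memory_b": {"solution": 0.9},
}
print(round(best_relay_strength(relays, "problem", "solution"), 2))  # 0.72

# If every relay's quality wanes (say, from constant outsourcing to an
# assistant), the best reachable strength drops even though the stations
# themselves, the memories, are all still there.
weakened = {k: {n: q * 0.5 for n, q in v.items()} for k, v in relays.items()}
print(round(best_relay_strength(weakened, "problem", "solution"), 2))  # 0.18
```

The point of the sketch is the last comparison: halving relay quality does not delete any station, yet the usable connection between problem and solution collapses, which is one way to read the risk described above.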
Humanity is not yet at the stage of a smart pill, nootropics, intelligence pill, smart drugs or cognitive ...