
AI Articles (Zero Hedge)
* https://x.com/Artemisfornow/status/1954207559109218738
* https://www.zerohedge.com/technology/200000-wall-street-jobs-risk-agentic-ai-agents-become-major-breakthrough
* https://www.zerohedge.com/ai/softbanks-masayoshi-son-ready-hand-over-reins-he-goes-all-ai
* https://www.zerohedge.com/ai/pentagon-awards-contracts-4-artificial-intelligence-developers - “Anthropic, Google, OpenAI, and xAI will each receive a contracting award with a ceiling of $200 million, according to a statement shared by the Chief Digital and Artificial Intelligence Office.”
* Jim Rickards on AI:
AI Lacks Common Sense
Another limitation on AI, which is not well known, is the Law of Conservation of Information in Search. This law is backed up by rigorous mathematical proofs. What it says is that AI cannot find any new information. It can find things faster and it can make connections that humans might find almost impossible to make. That’s valuable. But AI cannot find anything new. It can only seek out and find information that is already there for the taking. New knowledge comes from humans in the form of creativity, art, writing and original work. Computers cannot perform genuinely creative tasks. That should give humans some comfort that they will never be obsolete.
A further problem in AI is dilution and degradation of training sets as more training set content consists of AI output from prior processing. AI is prone to errors, hallucinations (better called confabulations) and inferences that have no basis in fact. That’s bad enough. But when that output enters the training set (basically every page on the internet), the quality of the training set degrades, and future output degrades in sync. There’s no good solution to this except careful curation. If you have to be a subject matter expert to curate training sets and then evaluate output, this greatly diminishes the value-added role of AI.
….
High-flying AI companies are quickly finding that their systems can be outperformed by newer systems that simply use big ticket AI output as a baseline training set. This is a shortcut to high performance at a small fraction of the cost. The establishment AI companies like Microsoft and Google call this theft of IP, but it’s no worse than those giants using existing IP (including my books, by the way) without paying royalties. It may be a form of piracy, but it’s easy to do and almost impossible to stop. This does not mean the end of AI. It means the end of sky-high profit projections for AI. The return on the hundreds of billions of dollars being spent by the AI giants may be meager. [My emphasis.]
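The "shortcut" Rickards describes in the last paragraph is what ML practitioners usually call knowledge distillation: training a cheap student model on the outputs of an existing, expensive teacher model rather than on original labeled data. Below is a minimal, hypothetical sketch of that idea with a toy teacher and a small logistic-regression student; every name and number is illustrative and stands in for no vendor's actual pipeline.

```python
# Minimal illustrative sketch of distillation: a small "student" is fit to the
# outputs of an existing "teacher" model, i.e. the teacher's predictions become
# the training set. All functions and numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def teacher_predict(x):
    # Stand-in for an expensive, already-trained model: returns the
    # probability that each input belongs to class 1.
    return 1.0 / (1.0 + np.exp(-(2.0 * x[:, 0] - 1.5 * x[:, 1] + 0.3)))

# Unlabeled inputs, labeled for free by querying the teacher.
X = rng.normal(size=(5000, 2))
soft_labels = teacher_predict(X)          # the "big ticket AI output"

# Student: a small logistic model fit to the teacher's soft labels by
# gradient descent -- no access to the teacher's own training data needed.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - soft_labels                # cross-entropy gradient w.r.t. logits
    w -= lr * (X.T @ grad) / len(X)
    b -= lr * grad.mean()

# The student now approximates the teacher at a fraction of the cost.
X_test = rng.normal(size=(1000, 2))
student_p = 1.0 / (1.0 + np.exp(-(X_test @ w + b)))
agreement = np.mean((student_p > 0.5) == (teacher_predict(X_test) > 0.5))
print(f"student/teacher agreement on held-out inputs: {agreement:.1%}")
```

The sketch only shows the mechanism, not the economics, but it makes Rickards' point concrete: the expensive part (the teacher) is queried, not rebuilt, so the marginal cost of the imitating system is small.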