Not only are AI chatbots still hallucinating; by some accounts, the problem is getting worse. And despite abundant coverage of LLMs' tendency to make things up, people are still not fact-checking, leading to some embarrassing consequences. Even the legal team at Anthropic (the company behind the Claude frontier LLM) got caught.
Also in this episode:

- Google has a new tool just for making AI videos with sound: what could possibly go wrong?
- Lack of strategic leadership and failure to communicate about AI's ethical use are two findings from a new Global Alliance report
- People still matter: some overly exuberant CEOs are walking back their AI-first proclamations
- Google AI Overviews lead to a dramatic reduction in click-throughs
- Google is teaching American adults how to be adults. Should they be finding your content?
- In his tech report, Dan York looks at some services shutting down and others starting up
Continue Reading →
The post FIR #466: Still Hallucinating After All These Years appeared first on FIR Podcast Network.