AI Today

Hallucination: a bitter pill to swallow


AI hallucinates 100% of the time. That's by design: without hallucinating the next word, the transformer architecture wouldn't exist.

Thankfully, LLMs built for general-purpose applications are right about 80% of the time. But that still leaves one output in five questionable, which is not especially reassuring if you're an air traffic controller or a cardiologist.

How can we ever truly trust the machine?

On this episode of AI Today, we embark on a groundbreaking quest to ground these 'digital dreamers' in reality.

Discover how cutting-edge research is moving beyond merely detecting the problem to actively reducing how often hallucinations occur.

We delve into innovative techniques that employ internal fact-checking mechanisms, split complex queries so that distinct questions don't collide and confuse the model, and track word-by-word groundedness against source material.
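
For the technically curious, here is a minimal sketch of that last idea: flagging each word of a model's answer by whether it appears in the source material. This toy word-overlap check in Python is our own illustration (the function name groundedness_report is invented here), not the method covered on the episode:

    import re

    def groundedness_report(answer: str, source: str):
        # Toy groundedness check: flag each word of the answer by whether
        # it appears anywhere in the source. It ignores paraphrase, word
        # order, and meaning entirely, so treat it as a sketch only.
        def tokenize(text):
            return re.findall(r"[a-z0-9']+", text.lower())

        source_words = set(tokenize(source))
        flags = [(word, word in source_words) for word in tokenize(answer)]
        score = sum(ok for _, ok in flags) / len(flags) if flags else 0.0
        return flags, score

    flags, score = groundedness_report(
        "The heart has four chambers and two wings.",
        "The human heart has four chambers: two atria and two ventricles.",
    )
    print(f"{score:.0%} of words grounded")  # 'wings' is the ungrounded word

Real groundedness trackers swap the overlap test for entailment models and span-level attribution, but the shape is the same: score the answer, word by word, against the evidence.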

You'll learn how this confidence-boosting research is paving the way for an AI credibility revolution: a future where technology is not just remarkably powerful but significantly more dependable.

Join us to understand the innovative solutions building AI you can rely on, where AI becomes a trusted accelerator of success...


AI Today, by Dave Thackeray