In this session, we tackle the "Paradox of Modern AI": systems built to sound incredibly intelligent (fluent) before they were built to be reliably truthful. We explore the cognitive architecture behind why AI lies, the specific taxonomy of hallucinations, and the "Epistemic Hygiene" toolkit you need to move from a passive consumer to an active verifier.
By hansoneducationservices