
Sora Model and AI Video
OpenAI’s Sora model demonstrates how AI video has become nearly indistinguishable from real footage, reinforcing that AI progress continues to accelerate.
Hallucinations in LLMs
Mike Nedelko discussed an OpenAI paper reframing hallucinations as the result of training flaws and evaluation incentives, not mysterious behaviour. LLMs train in two phases: unsupervised pre-training (predicting the next word) and post-training (fine-tuning through human feedback and reinforcement learning).
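A minimal, purely illustrative sketch of the pre-training objective: predicting the next word from statistics of the training text. The toy corpus and bigram counting below are assumptions for illustration only, not how GPT-scale models are implemented.

```python
# Toy illustration of "predict the next word" pre-training:
# learn next-word statistics from raw text by counting bigrams.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()  # hypothetical training text

next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- the statistically likeliest next word
```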
Sources of Hallucinations
Hallucinations arise from singleton facts (rare facts seen only once in training, measured by the paper's “singleton rate”) and from intrinsic limitations, where models rely on statistical patterns rather than reasoning, as shown in the “strawberry problem” of counting letters in a word.
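A rough sketch of the singleton-rate idea under toy assumptions: facts that appear only once in the training data give the model no statistical signal to separate them from noise, so a comparable fraction of errors on such facts is expected. The fact list below is hypothetical.

```python
# Count how many training facts are "singletons" (seen exactly once).
from collections import Counter

# Each item stands for a (subject, fact) pair extracted from training text.
facts = [
    ("Paris", "capital of France"),    # repeated many times
    ("Paris", "capital of France"),
    ("Paris", "capital of France"),
    ("Alice Zhang", "born 14 March"),  # seen exactly once -- a singleton
    ("Bob Kumar", "born 2 July"),      # seen exactly once -- a singleton
]

counts = Counter(facts)
singletons = [f for f, c in counts.items() if c == 1]
singleton_rate = len(singletons) / len(facts)
print(f"singleton rate: {singleton_rate:.2f}")
# The paper argues this fraction sets a floor on errors for such one-off facts.
```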
Flawed Evaluation Systems
Current evaluation systems reward correct guesses but not uncertainty, encouraging confident falsehoods. OpenAI proposes new benchmarks that reward calibrated honesty, though implementation remains challenging.
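A hedged illustration of the incentive argument, using hypothetical scoring rules rather than OpenAI's proposed benchmarks: under plain accuracy, a model that is unsure still maximises its score by guessing, while a rule that penalises confident wrong answers and gives partial credit for abstaining flips that incentive.

```python
# Two hypothetical scoring rules for a single question with correct answer "A".
def accuracy_score(answer, correct):
    return 1.0 if answer == correct else 0.0

def calibrated_score(answer, correct, idk_credit=0.5, wrong_penalty=-1.0):
    if answer == "I don't know":
        return idk_credit
    return 1.0 if answer == correct else wrong_penalty

def expected(score_fn, p_right):
    # Expected score if the model always guesses vs. always abstains.
    guess = p_right * score_fn("A", "A") + (1 - p_right) * score_fn("B", "A")
    abstain = score_fn("I don't know", "A")
    return guess, abstain

print(expected(accuracy_score, 0.3))    # (0.3, 0.0)  -> guessing always pays
print(expected(calibrated_score, 0.3))  # (~-0.4, 0.5) -> abstaining wins when unsure
```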
Complex Reasoning and Scale-Free Networks
LLMs struggle with complex reasoning compared to the brain’s scale-free network, which features interconnected hubs that enable adaptability and self-organization.
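For contrast, a small sketch of a scale-free network (not the BDH model itself), built with a standard Barabási–Albert preferential-attachment generator from networkx: a few hub nodes accumulate most of the connections, the heavy-tailed structure the discussion attributes to the brain.

```python
# Requires networkx. Generates a preferential-attachment graph and shows
# that its degree distribution is dominated by a handful of hubs.
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

degrees = sorted((d for _, d in G.degree()), reverse=True)
print("top hub degrees:", degrees[:5])
print("median degree:", degrees[len(degrees) // 2])
# A few nodes have far more connections than the typical node --
# the heavy-tailed signature of a scale-free topology.
```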
BDH (Dragon Hatchling) Architecture
The new BDH architecture mimics this biological design, achieving GPT-2-level performance with greater efficiency. As part of the push toward Axiomatic AI, it aims for models that scale predictably and stably.
Emergent Attention and Interpretability
In BDH, attention emerges naturally from local neuron interactions, producing interpretable, brain-like behaviour with sparse, composable structures that could power future modular AI systems.
By Dillan Leslie-Rowe