
We dissect Francois Fleuret's Free Transformer, which injects a learned latent variable Z into autoregressive generation via a tiny CVAE-like encoder. With only one extra non-causal block, it introduces minimal overhead yet unlocks high-level planning that improves reasoning on benchmarks. We compare latent planning to explicit chain-of-thought and ponder how combining latent and explicit reasoning could unlock new capabilities.
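For listeners curious about the mechanism, here is a minimal PyTorch-style sketch of the idea as described above, not Fleuret's actual implementation: a small non-causal encoder block reads the sequence, produces a latent code Z via the usual CVAE reparameterization, and Z is injected into the causal decoder's hidden states. All module names, dimensions, and the mean-pooled single Z are illustrative assumptions.

```python
# Illustrative sketch only; not the Free Transformer's actual code.
# Module names, dimensions, and the single mean-pooled Z are assumptions.
import torch
import torch.nn as nn

class LatentPlannerLM(nn.Module):
    """Decoder-only LM with a tiny CVAE-like encoder that injects a latent Z."""
    def __init__(self, vocab=32000, d=512, n_heads=8, n_layers=6, z_dim=64, max_len=1024):
        super().__init__()
        self.tok = nn.Embedding(vocab, d)
        self.pos = nn.Embedding(max_len, d)
        # One extra non-causal block acts as the encoder over the full sequence.
        self.encoder_block = nn.TransformerEncoderLayer(d, n_heads, 4 * d, batch_first=True)
        self.to_mu = nn.Linear(d, z_dim)
        self.to_logvar = nn.Linear(d, z_dim)
        self.z_proj = nn.Linear(z_dim, d)
        # Standard causal decoder stack.
        layer = nn.TransformerEncoderLayer(d, n_heads, 4 * d, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d, vocab)

    def forward(self, ids):
        B, T = ids.shape
        x = self.tok(ids) + self.pos(torch.arange(T, device=ids.device))
        # Non-causal encoder sees the whole sequence and summarizes it into Z.
        h = self.encoder_block(x).mean(dim=1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        # Inject Z into every decoder position, then run the causal decoder.
        x = x + self.z_proj(z).unsqueeze(1)
        causal_mask = torch.triu(
            torch.full((T, T), float("-inf"), device=ids.device), diagonal=1
        )
        x = self.decoder(x, mask=causal_mask)
        logits = self.lm_head(x)
        # KL term regularizes Z toward a standard normal, as in a CVAE objective.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
        return logits, kl
```

At generation time Z would be sampled from the prior rather than the encoder, which is what lets the latent act as a high-level "plan" chosen before any tokens are emitted.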
Note: This podcast was AI-generated, and sometimes AI can make mistakes. Please double-check any critical information.
Sponsored by Embersilk LLC
By Mike Breault