Giuseppe Birardi is CTO of Orma Lab, an Italian consultancy focused on R&D and industrial AI.
A former researcher turned PM and developer, he also co-organizes the Python Bar community.
What we cover:
- The geometry of meaning: Embeddings turn words into vectors with direction and magnitude, letting context “pull” an ambiguous word toward one sense (money-bank vs. river-bank). Dimensionality reduction compresses co-occurrence statistics into 500–700D spaces where vectors behave as transformations of meaning rather than static points (see the toy vector sketch after this list).
- Inside transformers: Multi-head attention re-weights tokens to resolve ambiguity across a sequence, then MLP layers with ReLU activations “fold” the space—think approximating a circle with linear cuts after repeated folds—so nonlinear problems become linearly separable (see the ReLU-folding sketch after this list).
- Prompt engineering as activation: How phrasing can turn on skills learned during training (e.g., “TL;DR” for summarization, “step by step” for task decomposition), and why chain-of-thought often simulates reasoning and can still hallucinate (see the prompt sketch after this list).
- Probing the latent space: From mechanistic interpretability to feature-level observability in open models (e.g., GemmaScope) and why steering features is promising but not yet turnkey for production. Concrete example: apparent “decryption” often reflects seen patterns (like Caesar shifts) rather than true cryptanalysis.
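A minimal NumPy sketch of the “context pulls meaning” idea from the first bullet. The 4D vectors and values are invented for illustration (real embedding spaces are the 500–700D ones discussed in the episode); adding a bit of context vector shifts the ambiguous “bank” toward one sense:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: direction matters more than magnitude.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-made 4D toy vectors (invented values, far smaller than real embedding spaces).
bank  = np.array([0.5, 0.5, 0.1, 0.0])   # ambiguous between two senses
money = np.array([0.9, 0.1, 0.0, 0.0])   # "finance" direction
river = np.array([0.1, 0.9, 0.0, 0.0])   # "nature" direction

# Context "pulls" the ambiguous vector toward one sense.
bank_near_money = bank + 0.5 * money
bank_near_river = bank + 0.5 * river

print(cosine(bank_near_money, money), cosine(bank_near_money, river))  # ~0.91 vs ~0.59
print(cosine(bank_near_river, river), cosine(bank_near_river, money))  # ~0.91 vs ~0.59
```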
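A small NumPy sketch of the “folding” picture from the transformers bullet (my illustration, not code from the episode). One fixed ReLU layer turns the nonlinear “inside the circle?” problem into something a single straight cut can handle, with the circle approximated by the diamond |x| + |y| < t:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: is a point inside the circle of radius 0.7?
# Not solvable with a single straight cut in raw (x, y) coordinates.
X = rng.uniform(-1.0, 1.0, size=(5000, 2))
y = (np.linalg.norm(X, axis=1) < 0.7).astype(int)

def relu(z):
    return np.maximum(z, 0.0)

# One fixed ReLU layer "folds" each axis: relu(x) + relu(-x) == |x|.
H = np.column_stack([relu(X[:, 0]), relu(-X[:, 0]),
                     relu(X[:, 1]), relu(-X[:, 1])])

# After folding, a single linear threshold on the features is the diamond
# |x| + |y| < t: the circle approximated by straight cuts.
pred = (H.sum(axis=1) < 0.85).astype(int)

print("accuracy after ReLU folding:", round((pred == y).mean(), 3))   # ~0.93
print("majority-class baseline:    ", round(max(y.mean(), 1 - y.mean()), 3))  # ~0.62
```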
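A hedged sketch of the “prompting as activation” bullet. `call_llm` is a hypothetical placeholder, not a real API; swap in whichever client you actually use:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: replace with a real client call (OpenAI, Ollama, vLLM, ...).
    return f"<model response to a {len(prompt)}-character prompt>"

article = "…long text to analyse…"   # placeholder input

# Phrases that were frequent in the training data tend to switch on the matching skill.
summary = call_llm(f"{article}\n\nTL;DR:")                     # summarization
steps   = call_llm(f"{article}\n\nLet's think step by step:")  # task decomposition

# Caveat from the episode: chain-of-thought output reads like reasoning,
# but it is still next-token prediction and can hallucinate.
print(summary)
print(steps)
```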
Why it matters: If you’re building RAG or agentic applications, these mental models help you design better prompts, set up experiments, choose SOTA models first, and then optimize cost/latency. Giuseppe also shares how Italian firms—via soft-finance-backed R&D—are moving real AI products into production across domains.
If you want to learn more about the geometry of the latent space: https://www.lesswrong.com/posts/nfGZtKzz8WzxF3MAs/on-the-geometrical-nature-of-insight
Connect with Giuseppe:
- LinkedIn: https://www.linkedin.com/in/giuseppe-birardi-18a7b011/
Connect with me:
- LinkedIn: https://www.linkedin.com/in/christianbarra/
Check out our awesome sponsor, zerobang, your 0 to 1 AI partner: www.zerobang.dev