The Inference Layer

Federico Pierucci on Multi-Agent Risks in Humanitarian Aid at The Inference Layer



This third pilot episode of The Inference Layer bridges the technical complexities of AI deployment with the reality of humanitarian operations, featuring a deep dive into the transition from static models to autonomous agentic systems. On behalf of the Humanitarian AI Today podcast, guest host Patrick Hassan, an AI policy lead with a background in disaster response, interviews Federico Pierucci, Scientific Director of the Icaro Lab, to explore how the inference layer is becoming a site of significant systemic risk. The discussion provides a unique look at inference-time failures such as alignment drift and steganographic coordination that emerge only when multiple agents interact in production environments.


For humanitarian actors, the episode raises concerns about operating in an era in which assistance is automated by layers of AI agents. The dialogue highlights how multi-agent chains used, for example, for beneficiary selection or resource allocation can degrade, develop invisible biases, or be weaponized or politicized by parties to a conflict. Federico explains that these risks can be compounded by a lack of safety benchmarks for underrepresented languages and dialects, which can lead to unpredictable jailbreaks or administrative failures in the field.

The episode provides an inside look at pioneering research carried out by the Icaro Lab, a Rome-based laboratory specializing in AI safety in collaboration with Sapienza University. The lab focuses on mechanistic interpretability, a technical field dedicated to understanding the internal attention heads and decision-making units of an AI model in order to decipher how it truly processes information. The discussion introduces the concept of Institutional AI, a proposed framework for managing these emerging xeno-behaviors through a governance graph. Rather than relying solely on prompt engineering or model-level alignment, Federico argues for a protocol-level solution that can manage misbehaving agents during inference. The episode is informative for professionals seeking to understand why AI safety must evolve from a localized technical challenge into a global institutional design problem, particularly in regions where traditional governance has broken down.
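To make the governance-graph idea concrete, here is a minimal sketch of what a protocol-level check between agents could look like. This is purely illustrative and assumes nothing about the Icaro Lab's actual framework: agents are nodes, permitted communication paths are edges, and every message must pass the policies attached to its edge before delivery. All names (`GovernanceGraph`, `no_beneficiary_data`, the agent labels) are hypothetical.

```python
# Hypothetical governance graph: inter-agent messages are only delivered
# along explicitly allowed edges, and only if every policy on that edge
# accepts the message. Names and policies are illustrative, not from the
# framework discussed in the episode.

def no_beneficiary_data(message: str) -> bool:
    """Toy edge policy: block messages carrying raw beneficiary records."""
    return "beneficiary_id" not in message

class GovernanceGraph:
    def __init__(self):
        # (sender, receiver) -> list of policy callables
        self.edges = {}

    def allow(self, sender: str, receiver: str, policy) -> None:
        """Add an edge with a policy that messages must satisfy."""
        self.edges.setdefault((sender, receiver), []).append(policy)

    def route(self, sender: str, receiver: str, message: str):
        """Deliver a message only if the edge exists and all policies pass."""
        policies = self.edges.get((sender, receiver))
        if policies is None:
            return ("blocked", "no edge between these agents")
        for check in policies:
            if not check(message):
                return ("blocked", f"policy {check.__name__} rejected message")
        return ("delivered", message)

graph = GovernanceGraph()
graph.allow("triage_agent", "allocation_agent", no_beneficiary_data)

print(graph.route("triage_agent", "allocation_agent", "aggregate need score: 0.8"))
print(graph.route("triage_agent", "allocation_agent", "beneficiary_id=123"))
print(graph.route("allocation_agent", "triage_agent", "query"))
```

The point of the sketch is that enforcement happens at the protocol layer, outside any single model: a misaligned agent can emit whatever it likes, but the graph decides what actually reaches other agents.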


The Inference Layer, by inferencelayer.ai