Federico Pierucci on Multi-Agent Risks in Humanitarian Aid at The Inference Layer

Co-produced by Humanitarian AI Today, this third pilot episode of The Inference Layer podcast bridges the technical complexities of AI deployment with the reality of humanitarian operations, diving into the transition from static models to autonomous agentic systems. On behalf of the Humanitarian AI Today podcast, guest host Patrick Hassan, an AI policy lead with a background in disaster response, interviews Federico Pierucci, Scientific Director of the Icaro Lab, to explore how the inference layer is becoming a site of significant systemic risk. The discussion offers a unique look at inference-time failures, such as alignment drift and steganographic coordination, that emerge only when multiple agents interact in production environments.

For humanitarian actors, the episode raises concerns about operating in an era of assistance automated by layers of AI agents. The dialogue highlights how multi-agent chains used, for example, for beneficiary selection or resource allocation can degrade, develop invisible biases, or be weaponized or politicized by parties to a conflict. Federico explains that these risks can be compounded by a lack of safety benchmarks for underrepresented languages and dialects, which can lead to unpredictable jailbreaks or administrative failures in the field.

The episode provides an inside look at pioneering research carried out by the Icaro Lab, a Rome-based laboratory specializing in AI safety in collaboration with Sapienza University. The lab focuses on mechanistic interpretability, a technical field dedicated to understanding the internal attention heads and decision-making units of an AI model in order to decipher how it truly processes information. The discussion introduces the concept of Institutional AI, a proposed framework for managing these emerging xeno-behaviors through a governance graph.
Rather than relying solely on prompt engineering or model-level alignment, Federico argues for a protocol-level solution that can manage misbehaving agents during inference. The episode is informative for professionals seeking to understand why AI safety must evolve from a localized technical challenge into a global institutional design problem, particularly in regions where traditional governance has broken down.

This episode moves beyond the surface-level AI ethics and safety issues the humanitarian community has long discussed to address inference-time vulnerabilities in agentic systems. This matters because, as the humanitarian community moves from developing and testing simple chatbots to incorporating autonomous multi-agent systems into humanitarian operations, we face new challenges with potentially serious consequences, making the inference layer a new frontier for humanitarian risk.
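The episode does not publish an implementation, but the idea of a protocol-level "governance graph" can be illustrated with a minimal, hypothetical sketch: agents are nodes, edges whitelist which agent may message which, and every message passes through a gate at inference time, so a misbehaving agent can be quarantined without touching prompts or model weights. All class and agent names below are illustrative assumptions, not part of the Icaro Lab's actual framework.

```python
# Hypothetical sketch of a governance graph gating agent-to-agent traffic.
# Agents are nodes; edges are allowed (sender, receiver) pairs; the route()
# gate enforces the protocol at inference time. Names are illustrative only.

class GovernanceGraph:
    def __init__(self):
        self.edges = set()          # allowed (sender, receiver) pairs
        self.quarantined = set()    # agents flagged as misbehaving

    def allow(self, sender, receiver):
        """Add a permitted communication edge to the graph."""
        self.edges.add((sender, receiver))

    def quarantine(self, agent):
        """Flag an agent so all of its traffic is dropped."""
        self.quarantined.add(agent)

    def route(self, sender, receiver, message):
        """Gate one message; return it only if the protocol permits it."""
        if sender in self.quarantined or receiver in self.quarantined:
            return None  # drop traffic to or from flagged agents
        if (sender, receiver) not in self.edges:
            return None  # edge is not in the governance graph
        return message


graph = GovernanceGraph()
graph.allow("needs_assessor", "allocator")

# Permitted edge: the message passes the gate.
print(graph.route("needs_assessor", "allocator", "region A: 1200 households"))

# Unlisted edge: blocked by the protocol regardless of message content.
print(graph.route("allocator", "needs_assessor", "revise estimate"))

# After quarantine, even previously permitted edges are cut off.
graph.quarantine("allocator")
print(graph.route("needs_assessor", "allocator", "update"))
```

The design choice this sketch highlights is the one Federico emphasizes: enforcement lives in the routing layer between agents, so containment does not depend on each model's own alignment holding up under drift.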