Humanitarian AI Today Podcast

By: Humanitarian AI Today

Humanitarian AI Today is the leading AI for Good podcast series focusing on humanitarian applications of artificial intelligence. We interview leaders, developers and innovators advancing humanitarian applications of AI from across the tech and humanitarian communities. The series is produced by the Humanitarian AI meetup.com community, linking local groups in Cambridge, San Francisco, Seattle, New York City, Toronto, Montreal, London, Paris, Berlin, Oslo, Geneva, Zurich, Bangalore, Tel Aviv and Tokyo. All rights reserved.
Episodes
  • Federico Pierucci on Multi-Agent Risks in Humanitarian Aid at The Inference Layer
    Mar 19 2026
    Co-produced by Humanitarian AI Today, this third pilot episode of The Inference Layer podcast bridges the technical complexities of AI deployment with the reality of humanitarian operations and dives into the transition from static models to autonomous agentic systems. On behalf of the Humanitarian AI Today podcast, guest host Patrick Hassan, an AI policy lead with a background in disaster response, interviews Federico Pierucci, Scientific Director of the Icaro Lab, to explore how the inference layer is becoming a site of significant systemic risk. The discussion provides a unique look at inference-time failures such as alignment drift and steganographic coordination that emerge only when multiple agents interact in production environments. For humanitarian actors, the episode raises concerns about operating in an era of assistance automated by layers of AI agents. The dialogue highlights how multi-agent chains, used for example for beneficiary selection or resource allocation, can degrade, develop invisible biases or be weaponized or politicized by parties to a conflict. Federico explains that these risks can be compounded by a lack of safety benchmarks for underrepresented languages and dialects, which can lead to unpredictable jailbreaks or administrative failures in the field. The episode provides an inside look at pioneering research carried out by the Icaro Lab, a Rome-based laboratory specializing in AI safety in collaboration with Sapienza University. The lab focuses on mechanistic interpretability, a technical field dedicated to understanding the internal attention heads and decision-making units of an AI to decipher how it truly processes information. The discussion introduces the concept of Institutional AI, a proposed framework to manage these emerging xeno-behaviors through a governance graph.
Rather than relying solely on prompt engineering or model-level alignment, Federico argues for a protocol-level solution that can manage misbehaving agents during inference. The episode is informative for professionals seeking to understand why AI safety must evolve from a localized technical challenge into a global institutional design problem, particularly in regions where traditional governance has broken down. This episode moves beyond the surface-level AI ethics and safety issues that the humanitarian community has discussed at length, to address inference-time vulnerabilities in agentic systems. This is an important topic because, as the humanitarian community moves from developing and testing simple chatbots to incorporating autonomous multi-agent systems into humanitarian operations, we face new challenges that can have very serious consequences, making the 'inference layer' a new frontier for humanitarian risk.
    42 m
  • Zineb Bhaby on NRC's CLEAR Initiative and Building a Digital Backbone for Humanitarian AI
    Mar 10 2026
    Zineb Bhaby, AI Lead at the Norwegian Refugee Council, introduces NRC’s CLEAR (Crisis Learning, Early-warning, Anticipation, and Response) initiative and discusses the critical necessity of data collaboration in the humanitarian sector with Humanitarian AI Today producer Brent Phillips. The CLEAR initiative is a three-year project supported by Twilio that is designed to build a digital "backbone" for humanitarian cooperation that the humanitarian community can collectively maintain and evolve. Zineb stresses that CLEAR’s goal is to bring together humanitarian, academic and private sector partners through a consortium to integrate diverse data sources into unified early warning and early action systems, leveraging artificial intelligence and predictive analytics to transform how humanitarian organizations detect, prepare for and respond to crises. Discussing CLEAR, the challenges associated with the collection and use of data by aid organizations, and the imperative to do better, Zineb nevertheless emphasizes that strict data governance remains a priority to protect the safety and sensitivity of information about vulnerable populations. By prioritizing an agile, safety-preserving, open-source approach that bridges the gap between available information and field response, the initiative seeks to create a more resilient and unified technological foundation for the entire humanitarian ecosystem.
    23 m
  • Lukas Borkowski on Building Voice-First Humanitarian AI on a National Scale
    Mar 4 2026
    Voices is a new mini-series from Humanitarian AI Today. In short daily flashpods, Voices passes the mic to guests to learn about new projects, events and advances in artificial intelligence and to discuss topics that are important to the humanitarian community. In this flashpod, Lukas Borkowski, Senior Director of Strategic Partnerships at Viamo, shares how artificial intelligence can serve the billions of people who remain offline and rely on basic mobile phones. In a conversation with Humanitarian AI Today producer Brent Phillips, Lukas spotlights the reality that most people in lower income countries live their lives largely offline and disconnected from the benefits of emerging AI applications, while at the same time living under mobile network coverage. Lukas describes how Viamo works directly with mobile network operators to negotiate long-term partnerships that enable national-scale, toll‑free hotlines and behavior-change campaigns, and how Viamo is rapidly expanding voice-first gen‑AI experiences for use cases like rural health worker hotlines and disaster-preparedness campaigns. He outlines Viamo’s cloud and in‑country server architecture, its use of generative AI and speech technology in local languages, and its specialization in behavior-change communication design tailored to specific geographies and demographics. Offering examples from public-health collaborations, he illustrates how voice-based generative AI can provide both community members and frontline workers with accessible information, advice and decision support. Touching on broader ecosystem challenges, Lukas highlights the lack of high-quality speech technology for many African and Asian languages and calls for more investment, standardized tooling, and collaboration with aggregators like Viamo rather than fragmented pilots and one-off solutions.
He calls for partners who bring clear behavioral objectives and a willingness to deploy imperfect but improving tools, arguing that waiting for perfect technology delays agency for people who urgently need trustworthy information. Looking ahead, he envisions seamless voice experiences where, in a single call, users can learn about services, ask personalized questions, and complete tasks.
    23 m