Episodes

  • Dr Piercosma Bisconti On The Social Frontiers Of Generative AI
    Mar 16 2026

    In this episode, hosts Michael Mainelli (London) and Adam Leon Smith welcome Piercosma Bisconti, dialling in from Rome, for a fresh European perspective on the evolving ethics and governance of generative AI. With a background in philosophy, robotics, and global politics, Piercosma shares his surprising shift from academic research to actively shaping EU and international AI standards, including his work with DEXAI – Artificial Ethics.

    The conversation dives into how ChatGPT's 2022 launch changed everything, suddenly bringing AI directly into human social spaces in ways earlier ethical frameworks never fully anticipated. Piercosma explores the rise of more interconnected AI systems and the surprising new risks that emerge when multiple models interact, collaborate, or even compete in real-world environments. Drawing on philosophy and systems thinking, he reflects on what this means for society, especially how always-agreeable AI might quietly reshape human relationships, emotional resilience, and social skills in the years ahead. Expect thoughtful insights on where standards and governance fit in, the limits of current testing approaches, and why the biggest changes may be more social than technological.

    A fascinating, big-picture discussion that asks: as AI becomes part of everyday social life, how do we keep our humanity intact? Tune in for Piercosma's unique blend of deep thinking and practical standards experience.

    45 m
  • Dr Christine Chow On Why AI Standards Matter To Investors
    Feb 18 2026

    In this second episode of the AI Standards Stack Podcast, guest Dr Christine Chow joins hosts Professor Michael Mainelli and Adam Leon Smith to discuss responsible AI governance from an investor’s perspective. Christine, a long-time investment professional and early advocate for responsible AI since 2012, shares insights drawn from her pioneering work, including leading Federated Hermes’ 2019 industry-first investor expectations on responsible AI and data governance.

    The conversation centres on why robust data governance forms the foundation of effective AI governance, covering data provenance; bias in raw, model and synthetic data; transparency; explainability; and accountability. It explores practical challenges across evolving AI paradigms, from efficiency tools to generative, agentic, multimodal and embodied systems, including use-case identification, prompt engineering, meaningful human-in-the-loop oversight, board-level engagement, and the societal risks of over-reliance, such as impacts on mental health, confidence and critical thinking. The episode also examines the fragmented global standards landscape (EU AI Act risk categories, NIST voluntary frameworks, ISO/IEC 42001), investor approaches to company engagement, environmental concerns around AI infrastructure, tensions between free speech and content guardrails, cultural complexities in human rights, and the push for concrete implementation guidance that balances innovation with safety and societal well-being.

    41 m
  • Maury Shenk On AI With A Worldview
    Feb 4 2026

    In this inaugural episode, guest Maury Shenk, CEO and Co-Founder of Ordinary Wisdom, joins regular hosts Professor Michael Mainelli and Adam Leon Smith to discuss how, through Ordinary Wisdom, he is developing innovative ways of shaping AI behaviour. Drawing on his background as a technology lawyer, entrepreneur, investor, and AI enthusiast, Shenk shares his journey into the field and his risk-oriented perspective on AI's potential harms, from autonomous weapons and social disruption to geopolitical tensions, while remaining optimistic about its transformative power as a general-purpose technology akin to electricity or the printing press.

    The discussion examines whether AI can be imbued with something akin to a "conscience" or selected worldview, moving beyond top-down commands from tech giants toward approaches informed by human values, social consensus, and rigorous testing. The episode sets the stage for a deeper exploration of how standards and ethics can guide AI toward beneficial outcomes in a rapidly changing world.

    Hosts:

    Professor Michael Mainelli: Director at Z/Yen Group, the City of London’s leading think-tank on finance and technology. Former Lord Mayor of London (2023–2024), Sheriff of the City of London (2019–2021), and President of the London Chamber of Commerce and Industry.

    Adam Leon Smith: Expert in AI regulation and technical standards. Chair of the AIQI Consortium. Deputy Chair of the UK national AI standards committee. Project leader for AI Act-related standards in CEN/CENELEC JTC 21.

    37 m