Episodes

  • S3E3 - Mark Boost - "Sovereign AI around the World"
    Apr 1 2026

    As AI adoption accelerates, a new question is emerging at the heart of global technology strategy: who controls the intelligence?

    In this episode of Futurise, Rob Price is joined by Mark Boost, CEO of Civo, to explore the rise of sovereign AI — and why governments and organisations are rethinking where their data lives, how it’s processed, and who ultimately has access to it.

    With a focus on UK-operated data centres, and perspectives spanning the US, Germany, and India, they unpack what sovereignty really means in an AI-driven world. Is this about regulation, resilience, or competitive advantage? And can organisations balance global innovation with local control?

    They also dig into the practical realities:

    • What does “sovereign AI” actually look like in practice?
    • How are data centre strategies evolving to support it?
    • Where do cloud providers fit — or clash — with sovereignty ambitions?
    • What are the trade-offs between control, cost, and capability?
    • And how should leaders think about risk, compliance, and long-term AI strategy?

    This isn’t just a policy discussion — it’s a strategic inflection point for AI adoption.

    For founders, investors, and enterprise leaders, the question is no longer whether to adopt AI — but how to do it on your own terms.



    33 m
  • S3E2 - Claudine Adeyemi-Adams - Case Management with Voice AI: How AI Startups Are Transforming Frontline Services
    Mar 11 2026

    In this episode, Rob Price speaks with Claudine Adeyemi-Adams, founder and CEO of Earlybird AI, about how voice AI and machine learning are transforming the way organisations handle cases, support clients, and gather real insight from conversations.

    Claudine’s journey from award-winning lawyer to AI startup founder highlights how new technology is reshaping traditional industries. Earlybird AI is building voice-driven tools that help employment support organisations, charities, and service providers understand the people they serve, automate case handling workflows, and improve outcomes using real-time insights.

    Rob and Claudine explore the rise of voice AI, the challenges of building responsible AI systems that interact directly with people, and what founders learn when turning a mission-driven idea into a scalable AI business.

    The conversation also touches on the realities of life as an innovative SME, product-market fit in AI startups, and how voice-based intelligence could change the way organisations listen to and support their communities.

    Topics covered in this episode:

    • Voice AI and the future of case management
    • Building an AI startup from a legal background
    • AI for employment support and social impact organisations
    • Responsible AI and human-centred design
    • Selling and scaling an AI business
    • Why voice data could become one of the most valuable sources of insight

    Links

    Earlybird website: https://www.getearlybird.ai
    Futuria website: https://futuria.ai

    33 m
  • S3E1 - Over the Garden Fence: What Gardening Teaches Us About AI, Expertise and Knowledge Transfer – with Steve Bustin
    Feb 18 2026

    In this episode of Futurise, Rob Price is joined by Steve Bustin, Chair of the Board of Trustees for the Hardy Plant Society, for a conversation that begins in the garden — and ends in the boardroom.

    “Over the garden fence” is how knowledge used to be shared: informally, experientially, and across generations. Gardening expertise — like much business expertise — is rarely written as technical documentation. It is contextual, tacit, and learned through experience.

    As organisations adopt AI and agentic systems, a similar challenge emerges: how do we translate deep domain knowledge into language that AI systems can understand — without losing meaning in the process?

    This episode explores:

    • How expert knowledge is traditionally passed down between generations

    • Why tacit expertise is difficult to encode into AI systems

    • The language gap between business specialists and AI technologists

    • What agentic AI might mean for capturing and applying domain expertise

    • Why successful AI adoption depends as much on terminology as technology

    By deliberately using language that would resonate with gardeners rather than AI engineers, this conversation highlights a wider leadership lesson: AI systems only become valuable when they can engage meaningfully with real-world expertise.

    If you’re a founder, investor, or executive navigating AI adoption, this episode offers a fresh perspective on knowledge transfer, AI leadership, and the future of artificial intelligence in practice.

    Comments are open — where do you see gaps between business expertise and AI terminology in your organisation?

    Subscribe to Futurise to hear first about conversations on Agentic AI, AI leadership, responsible AI development, AI governance, and the future of artificial intelligence.


    This episode is dedicated to Jeune Price (1941-2023), a passionate gardener and long-time member of the Hardy Plant Society.

    30 m
  • S2E21 - Building Safer Agentic AI: AI Safety, Alignment & Governance with Nell Watson
    Jan 14 2026

    Agentic AI is evolving rapidly — moving from copilots and automation tools to autonomous systems that can plan, decide, and act over time. As agentic systems become more capable, questions around AI safety, alignment, and governance become critical for founders, investors, and enterprise leaders.

    In this special episode of Season 2, Rob Price speaks with Nell Watson — AI ethics researcher, author, and Chair of the Safer Agentic AI Safety Experts Focus Group at IEEE — about what building safer agentic AI means in practice.

    The discussion explores:

    • How agentic AI systems are being developed and deployed today

    • Where organisations underestimate AI safety and alignment risks

    • What responsible AI governance looks like for agentic systems

    • How principles such as alignment, epistemic hygiene, and bounded goals translate into real products

    • Why leaders should engage with AI safety before regulation forces the issue

    As the future of AI shifts toward increasingly autonomous and agentic architectures, what does “safe enough” really mean — and who decides?

    If you’re building, funding, or adopting agentic AI, this conversation will help you think more clearly about responsible AI development and long-term trust.

    Subscribe to Futurise for conversations on agentic AI, AI leadership, responsible AI, and the future of artificial intelligence.


    Futurise explores Agentic AI, AI leadership, Responsible AI development, AI governance, and the future of artificial intelligence for founders, investors, and enterprise leaders.



    31 m
  • S2E20 - AI Leadership in a Fast-Moving Market: Acting Safely Without Waiting for Certainty – with Michael Wade
    Dec 16 2025

    In this episode of Futurise, Rob Price speaks with Professor Michael Wade about how leaders can take meaningful, responsible action on AI without waiting for certainty — and without exposing their organisations to unnecessary risk.

    As AI technology evolves rapidly — from generative AI to agentic systems — many organisations struggle with how to develop an effective AI strategy while markets are still shifting. The conversation explores what responsible AI leadership looks like in a fast-moving environment, and why deliberate action is often safer than paralysis.

    In this episode, we discuss:

    • Why traditional AI strategy frameworks break down in rapidly evolving markets

    • How leaders can act on AI without locking themselves into the wrong decisions

    • The difference between safe progress and reckless acceleration

    • What “beyond agentic AI” might mean for enterprise organisations

    • How to assess AI market risk as conditions change

    Professor Wade advises senior leaders on digital transformation and AI as Professor of Strategy and Digital at IMD Business School, helping organisations balance speed, responsibility, and long-term value creation.

    If you’re a founder, investor, or executive navigating AI adoption, this episode offers a practical lens on AI governance, AI risk management, and leadership in the future of artificial intelligence.

    Subscribe to Futurise for conversations on Agentic AI, AI leadership, responsible AI development, and the future of AI.


    Futurise explores Agentic AI, AI leadership, Responsible AI development, AI governance, and the future of artificial intelligence for founders, investors, and enterprise leaders.

    33 m
  • S2E19 - Sovereign AI and Digital Sovereignty in the UK: Michael Herron, CEO of Atos UK&I
    Dec 10 2025

    In Episode 19 of Season 2, Rob Price speaks with Michael Herron, CEO of Atos UK & Ireland, about sovereign AI, digital sovereignty, and what these shifts mean for the UK’s AI infrastructure and enterprise strategy.

    As governments and large organisations rethink control over data, compute, and AI systems, sovereign AI is becoming a strategic priority. Michael discusses Atos’ recent investments in a Sovereign Orchestration Hub, a Digital Agentic Centre, and a Sovereign Digital Enablement Centre — initiatives aligned with UK Government commitments around AI Growth Zones and national AI capability.

    The conversation explores:

    • What sovereign AI means in practice for enterprises and public sector organisations

    • Why digital sovereignty is rising on the executive agenda

    • How agentic AI systems fit into sovereign AI infrastructure

    • The strategic implications of AI Growth Zones in the UK

    • How capability development and future careers must evolve in an agentic AI economy

    As AI adoption moves from experimentation to infrastructure-level deployment, questions of resilience, governance, and sovereignty are becoming as important as speed and innovation.

    If you’re a leader navigating enterprise AI adoption, AI governance, or national AI strategy, this episode offers insight into how sovereign AI may reshape the future of artificial intelligence in the UK and beyond.

    Relevant links:
    https://atos.net/en-gb/united-kingdom
    https://atos.net/en-gb/lp/uki-digital-sovereignty
    https://atos.net/en-gb/2025/press-releases-en-gb_2025_10_20/atos-to-launch-new-sovereign-and-sovereign-ai-centres-across-the-uk

    Comments are open — how is your organisation approaching sovereign AI and digital sovereignty?

    Subscribe to Futurise to hear first about conversations on Agentic AI, AI leadership, responsible AI development, AI governance, and the future of artificial intelligence.


    31 m
  • Do Teams Still Work the Same in an Agentic AI World?
    Nov 26 2025

    #AgenticAI isn’t just changing tools — it’s changing how work gets done.


    In this episode of Futurise, Rob Price explores whether existing team design models still hold in an agentic world, and what leaders need to change as AI agents become part of everyday delivery.


    Featuring Matthew Skelton, co-author of Team Topologies and CEO/CTO of Conflux.


    • Do Team Topologies principles still apply with AI agents in the workflow?

    • What breaks first when teams stay “human-only” in design

    • How to think about accountability, flow, and boundaries in agentic teams

    • What organisations should start changing now, not later

    Matthew Skelton is co-author of the award-winning Team Topologies, founder and CEO/CTO of Conflux, and a leading voice in modern organisational design.

    • Matthew Skelton: matthewskelton.com

    • Conflux: confluxhq.com/outcomes

    • Team Topologies: teamtopologies.com/scale

    Comments are open — how do you think teams should evolve in an agentic world?

    If AI agents are doing more of the work, what should teams actually be responsible for? Interested to hear how others are thinking about this.

    35 m
  • AI Is Silencing Languages — Here’s How That Changes Everything
    Nov 12 2025

    AI systems are being built for a narrow slice of humanity.
    In this episode of Futurise, Rob Price explores why endangered language models matter—not just for inclusion, but for the future of agentic AI itself.
    Featuring Anna Mae Yu Lamentillo, founder and Chief Futures Officer of NightOwl AI.

    In this conversation, we discuss:

    • Why most AI models exclude entire cultures and languages

    • What endangered language models really are

    • The implications for accessibility, bias, and agentic systems

    • Lessons for organisations building bespoke and domain-specific AI models

    Anna Mae Yu Lamentillo is the founder of NightOwl AI, a mission-driven company using AI to preserve endangered languages and combat digital exclusion. Her work focuses on ensuring AI reflects the full spectrum of human culture—not just the dominant few.

    Anna Mae Lamentillo: https://www.annamaeyulamentillo.com
    NightOwl AI: https://www.thenightowl.ai

    Comments are open — should AI systems be required to support minority and endangered languages?

    If AI agents can’t understand large parts of the world’s population, are they really fit for purpose? Curious how others are thinking about inclusion in agentic systems.

    Please subscribe to Futurise to hear first about future episodes.


    24 m