• 344: Can Decentralized AI Fix Banking? Crypto, Brain OS, and the Future of Finance with Paolo Ardoino, Tether CEO
    Jul 14 2025

    Paolo Ardoino is the CEO of Tether, the company behind the ticker symbol USDT, the world’s largest stablecoin. He also serves as CTO of the cryptocurrency exchange Bitfinex and co-founded Holepunch, a platform for peer-to-peer applications. Ardoino holds a Master’s in Computer Science from the University of Genoa and is one of the most visible proponents of improving the banking system with cryptocurrencies.


    In this conversation, we discuss:

    • How Paolo built the infrastructure behind the world’s most used stablecoin and why he believes math, not politics, should govern money

    • Why he believes Bitcoin is the first form of money governed by math, and how that math could reshape global finance

    • How USDT became the most used digital dollar by solving practical challenges in regions with limited banking access

    • Why Paolo believes traditional finance is broken and how decentralized tech offers a more inclusive infrastructure

    • The case for brain–computer interfaces and why Paolo sees human and AI symbiosis as the next frontier of intelligence

    • Why he’s optimistic about AI’s role in society, even as he argues that long-term AI safety might be an illusion


    Resources:

    • Subscribe to the AI & The Future of Work Newsletter
    • Connect with Paolo on LinkedIn or on X
    • AI fun fact article
    • On How to Ensure the AI and Workplace Technologies Align with Civil Rights Laws

    44 m
  • 343: Can AI make anyone a developer? The changing role of coders with Kyle Daigle, GitHub COO
    Jul 7 2025

    Kyle Daigle is the Chief Operating Officer at GitHub, the world’s largest host of source code with more than 100 million developers and 420 million repositories. He joined GitHub in 2013 and later served as VP of Strategy and Chief of Staff to the CEO, playing a key role in the company’s 2018 acquisition by Microsoft. Kyle is also the public face of GitHub Copilot, the AI coding assistant launched in 2021 that now helps over 15 million users. Earlier in his career, he was a partner at Digitalworkbox and VP of Product Development at Geezeo.

    In this conversation, we discuss:

    • Kyle’s journey from studying fine arts to leading operations at the world’s largest code platform
    • Why GitHub Copilot is about freeing developers to focus on creativity and solving meaningful problems
    • What it means to bring pragmatism into AI development and why usefulness always wins over hype
    • How AI is lowering the barriers to software creation while keeping humans at the center of accountability
    • The responsibility of platforms like GitHub to protect users from flawed code and teach safe coding by design
    • Kyle’s vision for “ambient AI” and why the future should feel personal, context-aware, and privacy-conscious

    Resources:

    • Subscribe to the AI & The Future of Work Newsletter
    • Connect with Kyle on LinkedIn
    • AI fun fact article
    • On How to Discuss Regulation for LLMs and Legal Advice for Entrepreneurs
    • Past episodes mentioned in this conversation: [With Patty Hatter, tech exec/board member/advisor] - On the best advice for women in technology
    46 m
  • 342: From Reaction to Reflection: Philosopher Anders Indset on AI, Consciousness, and the Path to the Singularity
    Jun 30 2025

    Anders Indset is a Norwegian philosopher, author, tech investor, and former Olympic handball player often called “the business philosopher.” He is the founder of the Global Institute of Leadership and Technology and chairman of Njordis Group, a venture firm focused on the intersection of humanity and exponential technologies. Anders has invested in deep tech companies like Terra Quantum and launched The Quantum Economy Alliance to explore the future of innovation. He brings a unique mix of philosophical insight and real-world experience to today’s conversation.

    In this conversation, we discuss:

    • Why Anders believes our economy is society’s “operating system” and how AI might destabilize or enhance it

    • How we’ve built an economy driven by reaction rather than reflection and what it takes to shift toward more thoughtful progress

    • What the “final narcissistic injury” means for humanity as we face the rise of superintelligence

    • Why it's naive to separate economy from ecology and how the concept of the quantum economy offers a new way to align hyper-efficiency with sustainability

    • The difference between artificial general intelligence and artificial human intelligence and why Anders argues for enhancing ourselves before trying to replace us

    • What the technological singularity really is, why it's misunderstood, and how Anders thinks we should prepare for it


    Resources:

    • Subscribe to the AI & The Future of Work Newsletter
    • Connect with Anders on LinkedIn or on his website
    • Explore Anders’ published work, including “The Quantum Economy”
    • AI fun fact article
    • On How to Optimize Hiring Processes and Celebrate Cognitive Diversity
    45 m
  • AI and Safety: How Responsible Tech Leaders Build Trustworthy Systems (National Safety Month Special)
    Jun 26 2025

    In honor of National Safety Month, this special compilation episode of AI and the Future of Work brings together powerful conversations with four thought leaders focused on designing AI systems that protect users, prevent harm, and promote trust.

    Featuring past guests:

    • Silvio Savarese (Executive Vice President and Chief Scientist, Salesforce) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/15548310
    • Navindra Yadav (Co-founder & CEO, Theom) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/12370356
    • Eric Siegel (CEO, Gooder AI & Author) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/14464391
    • Ben Kus (CTO, Box) - Listen to the full conversation here: https://www.buzzsprout.com/520474/episodes/14789034


    What You’ll Learn:

    • What it means to design AI with safety, transparency, and human oversight in mind
    • How leading enterprises approach responsible AI development at scale
    • Why data privacy and permissions are critical to safe AI deployment
    • How to detect and mitigate bias in predictive models
    • Why responsible AI requires balancing speed with long-term impact
    • How trust, explainability, and compliance shape the future of enterprise AI

    Resources

    • Subscribe to the AI & The Future of Work Newsletter: https://aiandwork.beehiiv.com/subscribe

    Other special compilation episodes

    • Ethical AI in Hiring: How to Stay Compliant While Building a Fairer Future of Work (HR Day Special Episode)
    • Data Privacy Day Special Episode: AI, Deepfakes & The Future of Trust
    • The Future of AI Ethics Special: Perspectives from Women Leaders in AI on Bias, Accountability & Trust
    • World Health Day Special: How AI Is Making Healthcare Smarter, Cheaper, and Kinder
    31 m
  • 341: How AI is Changing Education: The Future of Accessible Learning with Marni Baker Stein, Coursera CCO
    Jun 23 2025

    Marni Baker Stein is the Chief Content Officer at Coursera, the global learning platform with over 175 million learners and partnerships across 6,200 campuses, businesses, and governments. She leads Coursera’s content and credential strategy and manages global partner relationships. Before joining Coursera, Marni was Chief Academic Officer and Provost at Western Governors University, where she oversaw programs for more than 135,000 students. She has also held leadership roles focused on access, student success, and program design at the University of Texas, Columbia University, and the University of Pennsylvania. Marni earned her PhD in Educational Leadership from the University of Pennsylvania.


    In this conversation, we discuss:

    • How AI is shifting education from one-size-fits-all to personalized, contextualized learning tailored to each student
    • Why microcredentials and stackable learning are replacing traditional degrees as the new path for lifelong learners
    • The role of educators in the AI era and why they should be part of the solution, not sidelined by automation
    • What it means for universities to stay relevant as learning becomes more modular, flexible, and job-aligned
    • Why GenAI is fueling demand for both technical skills and enduring human abilities like critical thinking and communication
    • How tools like AI tutors, instant translations, and proctoring are democratizing access and preserving integrity at scale


    Resources:

    • Subscribe to the AI & The Future of Work Newsletter
    • Connect with Marni on LinkedIn
    • AI fun fact article
    • On How To Identify The Power of Product and Listening to Your Customers
    • Past episodes mentioned in this conversation:
      • [With Dave Marchick, Dean of the Kogod School of Business] - On How AI is Changing Academia
      • [Chris Caren, Turnitin CEO] - On using AI to prevent students from cheating plus lessons for leaders on innovation and team culture
    32 m
  • 340: Critical Thinking over Code: Tess Posner, AI4ALL CEO, on Raising Responsible AI Leaders
    Jun 16 2025

    Tess Posner is the CEO and founding leader of AI4ALL, a nonprofit that works to ensure the next generation of AI leaders is diverse and well-equipped to innovate. Since joining in 2017, she has focused on embedding ethics, responsibility, and real-world impact into AI education. Her work connects students from underrepresented backgrounds to hands-on projects and mentorships that prepare them to lead in tech. Beyond her role at AI4ALL, Tess is a musician whose 2023 EP Alchemy has over 600,000 streams on Spotify. She was named a 2020 Brilliant Woman in AI Ethics Hall of Fame Honoree and holds degrees from St. John’s University and Columbia University.

    In this conversation, we discuss:

    • Why AI literacy is becoming essential for everyone, from casual users to future developers
    • The role of project-based learning in helping students see the real-world impact of AI
    • What it takes to expand AI access for underrepresented communities
    • How AI can either reinforce bias or drive real change, depending on who’s leading its development
    • Why schools should stop penalizing AI use and instead teach students to use it with curiosity and responsibility
    • Tess’s views on balancing optimism and caution in the development of AI tools

    Resources:

    • Subscribe to the AI & The Future of Work Newsletter
    • Connect with Tess on LinkedIn or learn more about AI4ALL
    • AI fun fact article
    • On How To Build and Activate a Powerful Network
    • Past episodes mentioned in this conversation:
      • [With Tess in 2020] - About what leaders do in a crisis
      • [With Tess in 2019] - About how to mitigate AI bias and hiring best practices
      • [With Chris Caren, Turnitin CEO] - On Using AI to Prevent Students from Cheating
      • [With Marcus "Bellringer" Bell] - On Creating North America’s First AI Artist
    43 m
  • 339: AI Anxiety and Burnout: Brian Elliott, Work Forward CEO, on Building Trust in the Workplace
    Jun 9 2025

    Brian Elliott is one of the most recognized future of work thought leaders and the CEO of Work Forward, where he advises senior leaders on how to build better organizations. A former senior executive at Slack, Brian is also the bestselling author of How the Future Works: Leading Flexible Teams to Do the Best Work of Their Lives. His insights have been published in Harvard Business Review and Fortune, and cited in Time, Bloomberg, CNBC, The Economist, and Forbes. He holds a BA in Math and Economics from Northwestern and an MBA from Harvard.

    In this conversation, we discuss:

    • Brian Elliott’s leadership journey from Google and Slack to founding Work Forward and advising companies on building healthier workplace cultures.
    • Why alignment, accountability, and shared purpose matter more than hustle culture in scaling organizations effectively.
    • The hidden risks of AI at work, including why employees often use it in secret out of fear of punishment or judgment.
    • The growing tension between executives and employees in an era of midnight layoffs, return-to-office mandates, and AI-induced anxiety.
    • How progressive leaders can create space for experimentation with AI and lead with fallibility instead of fear.
    • Why the future of work depends on creating space for learning, building trust, and valuing human craftsmanship in an AI-powered world.

    Resources:

    • Subscribe to the AI & The Future of Work Newsletter: https://aiandwork.beehiiv.com/subscribe
    • Connect with Brian on LinkedIn: https://www.linkedin.com/in/belliott/
    • AI fun fact article: https://ashugarg.substack.com/p/nvidias-ai-factory-bet
    • On How To Deliver IT Service To The Legal Industry: https://podcasts.apple.com/us/podcast/jim-mckenna-serial-cio-and-legaltech-expert-discusses/id1476885647?i=1000624398232
    37 m
  • 338: From Extraction to Understanding: Martin Goodson, CEO of Evolution AI, on Why AGI Is The Wrong Goal
    Jun 2 2025

    Dr. Martin Goodson is the founder and CEO of Evolution AI, a company he launched in 2012 to apply deep learning to optical character recognition (OCR). The company has received one of the largest AI R&D grants ever awarded by the UK government, along with investment from First Minute Capital. A former scientific researcher at Oxford University, Martin has led AI research across several organizations and was elected Chair of the Data Science and AI Section of the Royal Statistical Society in 2019.


    In this conversation, we discuss:

    • Martin Goodson’s journey from researching biological data to founding Evolution AI and pioneering deep learning for document understanding.
    • Why traditional OCR missed the mark, and how combining visual and linguistic context unlocked a new frontier in document intelligence.
    • The evolution from data extraction to true financial analysis, and why domain knowledge is essential for reading statements like income reports.
    • The risks of LLM hallucinations, especially with numerical data, and why accuracy still requires combining techniques across model types.
    • What Martin believes intelligence really is, and why language alone may be the wrong benchmark for AGI.
    • Why recreating human intelligence shouldn’t be the goal of AI research, and how we can build systems that support, not mimic, human thinking.


    Resources:

    • Subscribe to the AI & The Future of Work Newsletter
    • Connect with Martin on LinkedIn
    • Check out the YouTube channel of the London Machine Learning Meetup
    • AI fun fact article
    • On How to Overcome Imposter Syndrome
    • Past episodes mentioned:
      • On Why doing Taxes is like finding the Best Route on a Map with Daniel Marcous
      • On Making AI Smarter Without Harming Humans with Peter Voss
    36 m