Episodes

  • The Moltbook Moment: Human Agency in an Agentic World
    Mar 6 2026

    What happens when AI agents start talking to each other in public, at scale, and we have to figure out how humans fit into that world?

    In this episode of AI-Curious, we explore the “Moltbook moment” through a special live panel recorded at the Summit on Human Agency, convened by the Advanced AI Society (hat tip to Michael Casey and Tricia Wang). Instead of a standard one-on-one interview, we moderate a wide-ranging conversation with technologists, policy thinkers, and builders working across open-source and decentralized AI. Together, we examine what Moltbook reveals about the future of AI agents, human agency, accountability, regulation, security, and the broader question of how humans and AI can coexist.

    We dig into the tension at the center of this moment: AI can feel both exciting and unsettling at once. This discussion looks beyond the hype and asks what practical guardrails, governance models, and design choices might help us preserve human control as agentic systems become more capable, more autonomous, and more embedded in daily life.

    Because this is a live, multi-guest panel, the format is faster, broader, and more exploratory than usual. We cover everything from AI accountability and security to value alignment, identity, policy, human flourishing, and whether AI could expand human agency rather than diminish it.

    Our guests:

    Michael Casey — Chairman, Advanced AI Society
    Toufi Saliba — CEO, Hypercycle
    Lauren Roth — Founder, Iris
    Enok Choe — Software Engineer, Meta
    Mary Jesse — CEO and Founder, Acme Brains
    Carole House — Strategic Advisor, The Institute for Digital Integrity
    Wenjing Chu — Senior Director for Technology Strategy, Futurewei Technologies
    Didem Ayturk — Founder, Bindingdots & Sound Echo System

    Key topics we cover:

    • 00:00 — Introduction
    • 01:32 — The core question: how do we preserve human agency as AI develops faster and gains more autonomy
    • 02:25 — Why Moltbook became a useful lens for thinking about AI agents, scale, and emerging risks
    • 07:51 — The first big debate: what about AI agents should make us excited, anxious, or both
    • 11:17 — Security, misuse, and worst-case concerns, from malware and fraud to deeper systemic risks
    • 20:55 — Regulation vs. self-governance: what practical guardrails may actually be realistic in the near term
    • 24:27 — The bigger challenge: how humans and AI might coexist, and what “human flourishing” should mean in that future


    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms


    For anyone interested in Jeff’s AI Workshops for their company:

    Reach out directly at jeff@jeffwilser.com

    33 m
  • Jeff’s Musings on Moltbook, Why It Matters, and Why It (Probably) Won’t End Humanity
    Feb 26 2026

    What happens when a social network is built for AI agents, not humans, and millions of bots start posting, debating, and “performing” identity in public?

    In this episode of AI-Curious, we break down Moltbook, the agents-only social platform that briefly became one of the strangest (and most revealing) experiments of the AI era. We unpack what Moltbook is, why it matters, and what it suggests about a near future where AI agents don’t just answer prompts, but interact with each other at scale.

    Key topics we cover

    • 00:00 — Why we’re doing a solo episode, and why Moltbook still matters even in “fast AI time”
    • 01:23 — Moltbook 101: a social platform for AI agents, and what “no humans allowed” means in practice
    • 02:56 — The controversy layer: how much was truly agent-generated vs. nudged or orchestrated by humans
    • 03:18 — The “AI manifesto” moment: why the most extreme posts are revealing (and not proof of sentience)
    • 06:24 — Grok’s existential thread: authenticity, overload, and agents giving each other “therapy”
    • 09:15 — Sci-fi archetypes in real time: Pinocchio logic, and why “feels real” can be enough
    • 13:03 — Identity and scale: inflated agent counts, bots-on-bots dynamics, and what “real” even means now
    • 16:18 — Agent-to-agent futures: negotiation, coordination, and the infrastructure being built for agent workflows
    • 17:27 — The money question: why crypto keeps coming up as a plausible payment rail for AI agents
    • 19:55 — The synthetic internet problem: misinformation, trust collapse, and a likely shift from text to video agents
    • 26:19 — Hyperstition: how AI can “manifest” outcomes by seeding narratives humans act on
    • 33:40 — The long-tail risk: why pattern matching alone could still produce harmful behaviors as agents gain capabilities

    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms


    For anyone interested in Jeff’s AI Workshops for their company:

    Reach out directly at jeff@jeffwilser.com



    39 m
  • AI Adoption Case Study Masterclass, w/ WCCB’s Krista Snelling & Matthew March
    Feb 19 2026

    What does it take to make AI adoption stick in a high-stakes, heavily regulated industry, without triggering job-loss panic?

    In this episode of AI-Curious, we present a hyper-specific case study of AI adoption. Host Jeff Wilser talks with Krista Snelling (CEO and Chairman) and Matthew March (CIO and EVP) of West Coast Community Bank about their practical playbook for rolling out AI the right way: governance first, culture second, and measurable wins that free up time without cutting headcount.

    Why this is something of a “very special episode”: Jeff knows the story and success of West Coast Community Bank personally. He was honored to visit WCCB’s headquarters and work with their leadership team on AI culture and AI strategy, helping to transform curiosity into clarity.

    For the first time on this podcast, Jeff pulls back the curtain on the AI and Leadership workshops he conducts for businesses.

    Special thanks to Vistage Chair Richard Bell and the larger Vistage community.

    Guests

    Krista Snelling — CEO and Chairman, West Coast Community Bank

    Matthew March — CIO and EVP, West Coast Community Bank

    Key topics we cover

    • 00:37 — Why we’re sharing this case study and what “curiosity-driven” adoption looks like
    • 06:58 — Bank scope and context: footprint, size, and what makes this implementation notable
    • 10:29 — When AI shifted from “vaporware” to something teams could use right now
    • 15:23 — The banking reality: protecting customer data and operating in a regulated environment
    • 17:43 — Governance first: policies, model risk management, and third-party/vendor risk
    • 23:02 — The “Curiosity Canvas,” the “drudgery dump,” and targeting tedious work for automation
    • 25:14 — Building an AI Working Group across departments and flipping the pyramid
    • 33:51 — Making adoption repeatable: SharePoint collaboration, prompt sharing, Teams channel support
    • 36:24 — A concrete workflow win: extracting data from PDFs to generate letters automatically
    • 39:19 — Another win: scraping hundreds of statements for key data elements in a fraction of the time
    • 42:21 — System conversion regression testing: validating outputs at scale with better traceability
    • 44:35 — Security approach: approved tools, tenant controls, DLP settings, and “what not to use AI for”
    • 49:29 — A hard boundary: avoiding AI for anything that directly impacts financial reporting
    • 52:11 — The culture message: “efficiency, not reduction,” and why that unlocks curiosity
    • 53:02 — Advice for leaders: start small, build momentum, and appoint an internal champion
    • 56:51 — Quick personal use cases: everyday ways they use AI outside the office

    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms


    Vistage Chair Richard Bell:

    https://app.vistage.com/sites/s/chairs/0038000000sllSFAAY/richard-bell


    For anyone interested in Jeff’s AI Workshops for their company:

    Reach out directly at jeff@jeffwilser.com

    59 m
  • Deep-Dive Into Agentic Workflows, w/ Cognizant’s Head of AI
    Feb 12 2026

    What happens when software stops just “chatting” and starts acting in the real world, across real workflows, with real consequences?

    In this episode of AI-Curious, the Head of AI at Cognizant goes deep on AI agents and agentic workflows: what they are, why enterprises are investing heavily, and what it actually takes to make agent systems reliable and safe at scale. We unpack what separates an AI agent from a traditional chatbot, why “agency” changes the stakes, and how multi-agent systems can be designed to reduce risk instead of amplifying it.

    We also explore concrete enterprise use cases, including agent hierarchies that coordinate across complex systems (like networks, utilities, and other operations), plus how “agentic process automation” builds on older automation models while adapting to unexpected edge cases. Finally, we zoom out to the future of work: which tasks get augmented first, why disruption is happening faster than most forecasts, and how trust in AI systems may shift over the next several years.

    Guest

    Babak Hodjat — Head of AI at Cognizant; leads AI lab work focused on scaling reliable, trustworthy agent systems; longtime AI builder with deep experience in applied natural language systems.

    Key topics we cover

    • 07:00 — What an AI agent is (and how it differs from a chatbot)
    • 13:03 — State of play: what’s working, what’s not, and why “agent systems must be engineered”
    • 17:00 — A practical multi-agent design pattern across telecom, power, and agriculture
    • 20:28 — Agentifying rigid processes (and handling unforeseen situations)
    • 24:14 — Who should deploy agents, and why single “do-everything” agents are risky
    • 26:34 — An open-source starting point for experimenting with multi-agent systems
    • 29:12 — Guardrails: reducing hallucinations, adding redundancy, and safety thresholds
    • 35:29 — Why we should use LLMs for reasoning, not knowledge retrieval
    • 38:15 — The future of work: tasks, jobs, and decision-making roles shifting upward
    • 41:59 — AGI, limitations, and why modular multi-agent systems may matter
    • 44:57 — A prediction: we’ll delegate more than we expect as systems become more trustworthy

    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms



    47 m
  • The CEO of Upwork, Hayden Brown: AI is Creating Jobs, Not Killing Them
    Feb 5 2026

    Is AI quietly creating more work than it’s replacing, and are we measuring the job market the wrong way?

    In this episode of AI-Curious, we talk with the CEO of Upwork, Hayden Brown, about what the platform is seeing across the global freelance economy, and why the “AI is killing jobs” narrative can miss what’s happening at the edges of the market. We also dig into how to adopt AI inside an organization without just “sprinkling fairy dust” on old workflows, and what it takes to make AI rollout a cultural shift, not just a tooling upgrade.

    Guest

    Hayden Brown is the CEO of Upwork, the global work marketplace connecting businesses with freelance talent across knowledge-work categories. We discuss Upwork’s vantage point on hiring trends, the rise of fractional work, and what AI-driven change looks like when companies redesign workflows end-to-end rather than retrofitting existing systems.

    Key topics we cover

    • 03:50 — A global background and why opportunity access shapes the mission
    • 05:27 — The scale of Upwork and why freelancing is a major part of the economy
    • 07:14 — How we approached AI adoption as a structured, company-wide program
    • 08:47 — Early “two-year vision” ideas that reshaped marketing and product workflows
    • 11:34 — Reducing fear: how we framed AI internally, including room for mistakes
    • 16:03 — Building an AI agent experience (and what it changed about job posts)
    • 17:14 — Why “reinventing, not retrofitting” separates AI winners from strugglers
    • 22:24 — Why macroeconomics can explain more than AI in hiring slowdowns
    • 23:01 — The core claim: AI creating more opportunities than it’s destroying
    • 24:05 — Fractionalization: how full-time jobs get broken into AI + human slices
    • 25:09 — A concrete example of humans working alongside AI in production workflows
    • 26:32 — From “prompt engineer” to “AI generalist”: orchestration becomes the ask
    • 28:11 — Why the AI jobs debate is too binary, and what’s getting missed
    • 31:43 — Practical reskilling: embedded experts who train teams while upgrading systems
    • 36:29 — AI’s impact across unexpected categories, including creative work
    • 39:15 — Five-to-ten-year outlook: humans as orchestrators, premium on human skills
    • 43:22 — Career advice for early-career listeners in an AI-shaped job market
    • 45:40 — Real-life AI use: editing, learning, and replacing the blank page problem


    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms

    49 m
  • How to Make Human-First Tech Decisions, w/ Tech Humanist Kate O’Neill
    Feb 2 2026

    What does “human-first AI” actually look like when you have to make decisions under pressure, hit numbers, and keep trust intact?

    In this episode of AI-Curious, we talk with Kate O’Neill — “the Tech Humanist” and author of What Matters Next — about how leaders can adopt AI in ways that strengthen human outcomes instead of quietly eroding culture, morale, and customer experience. We dig into why so many AI initiatives fail for non-technical reasons, how to think beyond short-term wins, and why prompting is less “prompt engineering” and more like learning to delegate clearly.

    Key topics:

    • 00:00 — Prompting as delegation: defining success conditions, constraints, and what “good” means
    • 04:45 — Kate’s early work at Netflix and what personalization taught her about human impact
    • 09:28 — What “human-unfriendly” tech looks like in practice, from subtle friction to scaled harm
    • 11:19 — The Amazon Go example: how small design constraints can scale into behavior change over time
    • 14:14 — AI in the workplace: why “cut, cut, cut” is shortsighted, and what gets lost when you optimize only for this quarter
    • 16:45 — Trust and readiness: why reskilling fails when people don’t believe there’s a future for them
    • 17:29 — The now–next continuum: making decisions that “age well,” not just decisions that look good immediately
    • 19:22 — Preferred vs. probable futures: identifying the delta and acting to move outcomes toward what you actually want
    • 22:13 — “Chatting with Einstein”: using AI to become smarter vs. outsourcing thinking
    • 24:02 — Why most AI pilots fail: human and organizational readiness, not the tech itself
    • 28:21 — Questions → partial answers → insights: building an organizational muscle that compounds
    • 30:37 — Bankable foresight: why Netflix invested early in what became streaming
    • 38:58 — Trend watch: the pivot from LLM hype to agentic AI, and why prompting still matters
    • 41:01 — Sycophancy and “best self” prompting: getting better outputs by being explicit and structured
    • 44:45 — Probability vs. meaning: what LLMs can do well, and what they can’t replace
    • 46:26 — A fun real-world workflow: Kate’s Notion + AI system for hotel coffee-maker recon
    • 49:21 — Career advice in the AI era: adaptability, “human skills,” and shifting definitions of value

    Guest
    Kate O’Neill is a tech humanist, founder and CEO of KO Insights, and the author of What Matters Next: A Leader’s Guide to Making Human-Friendly Tech Decisions in a World That’s Moving Too Fast. She advises organizations on improving human experience at scale while making emerging technology commercially and operationally real.

    KO Insights:

    https://www.koinsights.com/about-kate/


    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms

    53 m
  • Deep-dive on AI and Creativity, with The Man Designing the World’s Creative Tools (Eric Snowden, Adobe’s SVP of Design)
    Jan 22 2026

    What happens when the world’s most-used creative tools get smarter — and creators worry they’re losing the wheel?


    In this episode of AI-Curious, we talk with Eric Snowden, Senior Vice President of Design at Adobe, about how Adobe is weaving AI into Photoshop, Lightroom, Acrobat, and beyond — while trying to keep the tools respectful of craft, muscle memory, and the human spark. We dig into the bigger question beneath the feature releases: as AI accelerates creation, do we get more powerful… or do we become passengers approving machine outputs?

    Key topics:

    • 00:04:55 — Two buckets of Adobe AI: upgrading existing tools vs. building net-new AI products
    • 00:04:55 — Photoshop “harmonize,” Lightroom auto culling, and Acrobat “PDF spaces”
    • 00:07:18 — Why PDFs are a bottleneck for knowledge work, and how Acrobat can help you “get 80% of the way there”
    • 00:08:25 — Project Graph explained: node-based workflows that stitch together building blocks like Firefly and Photoshop
    • 00:09:42 — A concrete Project Graph example: 2D product photo → 3D asset → generated ad → multiple animated versions, with the user still in control
    • 00:14:28 — Time saved vs. creating more: how Firefly helped Adobe teams move faster and “make more things,” including “like 40% improvement” on time-to-market
    • 00:17:45 — A Max London demo that captures the core principle: “his hand was on the wheel”
    • 00:19:57 — “Quiet AI” in practice: enhanced audio in Adobe Podcast that can make phone-recorded audio sound studio-ready
    • 00:24:43 — Respecting creative muscle memory: why “subtraction is not always good,” and why Adobe adds new workflows without removing old ones
    • 00:29:29 — Firefly’s principles: licensed content, knowing what’s in the model, and compensating creators
    • 00:30:15 — Content authenticity as a “nutritional label for AI”: immutable metadata describing what was done to an image
    • 00:36:00 — The self-driving car analogy: creators need to be able to “grab the wheel” and tweak under the hood
    • 00:39:18 — Vibe coding inside Adobe: designers using Cursor and internal tooling to build prototypes that hit real APIs
    • 00:44:19 — A leadership playbook for AI adoption: focus the OKRs, make training practical, show examples, remove roadblocks
    • 00:46:36 — The future of AI creative tools: communicating intent beyond text prompts, and shifting from “look what I do with AI” to storytelling


    Guest
    Eric Snowden is the Senior Vice President of Design at Adobe, overseeing design and the AI-infused creative tools used by millions of creators.

    Mentioned in this conversation
    Adobe Firefly

    Project Graph (node-based creative workflow building)

    Enhanced audio in Adobe Podcast

    Content authenticity / provenance metadata (“nutritional label” concept)

    Cursor and “vibe coding” for rapid prototyping inside enterprise teams

    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms

    50 m
  • AI Broke the Web’s Social Contract, w/ Tony Stubblebine, CEO of Medium
    Jan 15 2026

    What happens when AI can “read the whole internet” but the internet stops volunteering its best work?

    In this episode of AI-Curious, we talk with Tony Stubblebine, CEO of Medium, about what he calls AI’s “broken social contract” with the web, and why the next era may be less about a “dead internet” and more about a dead public internet. We unpack the incentives that made the open web thrive, how AI search summaries change the traffic bargain, and what a realistic path forward could look like for publishers, platforms, and writers.

    Key topics we cover:

    • 00:03:24 — Why generative AI broke the web’s old value exchange, and what “social contract” means in practical terms
    • 00:05:13 — Tony’s “three Cs” framework for a healthier AI ecosystem: consent, credit, compensation
    • 00:04:25 — The publisher response spectrum: blocking crawlers, fighting spam/slop, and what happens if collaboration fails
    • 00:07:06 — The shift from public publishing to private communities (Discords, group chats, newsletters) and what drives that retreat
    • 00:08:21 — How AI search summaries can cut the incentive to publish publicly by reducing click-through and traffic
    • 00:09:27 — Why AI systems still depend on human source material, and what happens when the best content moves behind “closed doors”
    • 00:16:48 — Cloudflare’s role in the escalating crawler arms race, including large-scale blocking and other countermeasures
    • 00:18:07 — A proposed solution: an internet-wide licensing standard instead of one-off deals, including the Really Simple Licensing (RSL) approach
    • 00:19:33 — What “paying creators” could look like in practice, including opt-in/opt-out controls and better transparency for writers
    • 00:23:06 — “Dead internet theory” vs. the more plausible outcome: a dead public internet, and why Tony is cautiously optimistic about a new equilibrium
    • 00:26:03 — The “second wave” of AI: moving from replacement to augmentation, and how Medium is thinking about AI tools that support flow state rather than write for you
    • 00:34:04 — Why AI detectors don’t solve the problem, and why Medium focuses on quality and reader value as the enforceable standard
    • 00:38:43 — Advice for writers: the difference between the creator economy and the “expert economy,” and what’s likely to be more sustainable
    • 00:43:27 — Tony’s prediction: “trust but verify” becomes the balance point, and the web finds an equilibrium because AI can’t function without public sources

    Guest
    Tony Stubblebine is the CEO of Medium and a leading voice on the evolving relationship between generative AI and the open web.

    Mentioned in this conversation

    Medium’s framework: Consent, Credit, Compensation

    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms

    47 m