Episodes

  • When Everyone Uses AI, What’s Real Anymore?
    Jan 14 2026

    As AI shows up everywhere, something shifts, and it becomes harder to tell what’s human and what’s generated.

    In this episode, Jessica and Kimberly unpack how AI-driven convenience is reshaping education, relationships, identity, and even big systems (like markets and healthcare). They explore signaling, semiotics, and why “perfect” content can feel thin or unreal, and end with small ways to choose more human signals in a noisy world.

    Bonus: If you want to see how this episode ended, tune in on YouTube for a few unfiltered bloopers at the end: https://www.youtube.com/@womentalkinboutai

    Topics we cover in this episode:

    • AI as an invisible intermediary
    • Finding the signal in the noise
    • Higher ed reality check
    • Why AI feels “safer” than people
    • Semiotics
    • The “uncanny valley” of social media
    • AI for therapy + parenting support
    • Cultural swing back

    Not-a-Sponsor Bloopers (YouTube only): Stick around on YouTube for our end-of-episode bloopers, featuring our favorite products that are definitely not sponsoring this show (yet). https://www.youtube.com/@womentalkinboutai



    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/








    1 hr 10 min
  • Rest, Resistance, and the Protestant Work Ethic (in the Age of AI)
    Jan 7 2026

    We’re kicking off 2026 with our most personal episode yet.

    This conversation wasn’t planned. We sat down intending to talk about what comes next for the show, and instead found ourselves in a deeper discussion about work, burnout, ambition, and what it means to live in a moment where AI is rapidly reshaping labor, identity, and trust.

    In this episode:

    • Why “work is sacred” feels harder to believe and harder to let go of
    • Burnout, hustle culture, and the cognitive dissonance of automation
    • Labor zero, post-labor economics, and the fear beneath productivity
    • Status, money, degrees, and inherited stories about worth
    • Rest as resistance and nervous system regulation
    • AI, trust erosion, and the danger of slow confusion
    • Dopamine, addiction, and withdrawal at a societal scale
    • Why connection may be the real antidote

    Sources:

    • David Shapiro's Substack on Labor Zero: https://daveshap.substack.com/p/im-starting-a-movement
    • He, She, and It by Marge Piercy: https://en.wikipedia.org/wiki/He,_She_and_It
    • Ethan Mollick's Substack on the temptation of The Button: https://www.oneusefulthing.org/p/setting-time-on-fire-and-the-temptation
    • Rest Is Resistance by Tricia Hersey: https://blackgarnetbooks.com/item/oR7uwsLR1Xu2xerrvdfsqA
    • The Last Invention (AI Podcast): https://podcasts.apple.com/us/podcast/the-last-invention/id1839942885

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/








    1 hr 3 min
  • Best of 2025: AI, Work, Resistance, and What We Learned
    Dec 31 2025

    Best of 2025 brings together some of the most impactful conversations from this year on Women Talkin’ Bout AI.

    In this episode, we revisit our top 5 episodes of the year:

    • Beyond Work: Post-Labor Economics with David Shapiro: A conversation about automation, empathy, and what remains uniquely human as AI reshapes work.
    • Refusing the Drumbeat with Melanie Dusseau and Miriam Reynoldson: A discussion on resistance in higher education and their open letter refusing the push to adopt generative AI in the classroom.
    • Once You See It, You Can’t Unsee It: The Enshittification of Tech Platforms: Jessica and Kimberly unpack enshittification and why so many tech platforms feel like they get worse over time.
    • Maternal AI and the Myth of Women Saving Tech with Michelle Morkert: A critical examination of “maternal AI” and what gendered narratives reveal about power and responsibility in tech.
    • Competing with Free: Why We Closed Moxie: A candid reflection on what it was like to build, and ultimately shut down, an AI startup in this moment.

    We’re heading into 2026 with some incredible guests and conversations we can’t wait to share.

    Thank you for listening, for thinking with us, and for staying curious alongside us.

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/








    39 min
  • The Trojan Horse of AI
    Dec 24 2025

    In this final guest episode of the year, we explore AI as a kind of Trojan horse: a technology that promises one thing while carrying hidden costs inside it. Those costs show up in data centers, energy and water systems, local economies, and the communities asked to host the infrastructure that makes AI possible.

    We’re joined by Jon Ippolito and Joline Blais from the University of Maine for a conversation that starts with AI’s environmental footprint and expands into questions of extraction, power, education, and ethics.

    In this episode, we discuss:

    • Why AI can function as a Trojan horse for data extraction and profit
    • What data centers actually do, and why they matter
    • The environmental costs hidden inside “innovation” narratives
    • The difference between individual AI use and industrial-scale impact
    • Why most data center activity isn’t actually AI
    • How communities are pitched data centers—and what’s often left out
    • The role of gender in ethical decision-making in tech
    • What AI is forcing educators to rethink about learning and work
    • Why asking “Who benefits?” still cuts through the hype
    • And how dissonance can be a form of clarity

    Resources mentioned:

    • IMPACT Risk framework: https://ai-impact-risk.com
    • What Uses More: https://what-uses-more.com

    Guests:

    • Jon Ippolito – artist, writer, and curator who teaches New Media and Digital Curation at the University of Maine.
    • Joline Blais – researches regenerative design, teaches digital storytelling and permaculture, and advises the Terrell House Permaculture Center at the University of Maine.

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/








    1 hr 21 min
  • Easy for Humans, Hard for Machines: The Paradox Nobody Talks About
    Dec 17 2025

    Why can AI crush law exams and chess grandmasters, yet still struggle with word games? In this episode, Kimberly and Jessica use Moravec's Paradox to unpack why machines and humans are "smart" in such different ways—and what that means for how we use AI at work and in daily life.

    They start with a practical fact-check on agentic AI: what actually happens to your data when you let tools like ChatGPT or Gemini access your email, calendar, or billing systems, and which privacy toggles are worth changing. From there, they dive into why AI fails at the New York Times' Connections game, how sci-fi anticipated current concerns about AI psychology decades ago, and what brain-computer interfaces like Neuralink tell us about embodiment and intelligence.

    Along the way: sycophantic bias, personality tests for language models, why edtech needs more friction, and a lighter "pit and peach" segment with unexpected life hacks.

    Resources by Topic

    Privacy & Security (ChatGPT)
    • OpenAI Memory & Controls (Official Guide)
    • OpenAI Data Controls & Privacy FAQ
    • OpenAI Blog: Using ChatGPT with Agents

    Moravec's Paradox & Cognitive Science
    • Moravec's Paradox (Wikipedia)
    • "The Moravec Paradox" (research paper)

    Sycophancy & LLM Behavior
    • "Sycophancy in Large Language Models: Causes and Mitigations" (arXiv)
    • "Personality Testing of Large Language Models: Limited Temporal Stability, but Highlighted Prosociality"

    Brain-Computer Interfaces & Embodied AI
    • Neuralink: "A Year of Telepathy" Update

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/








    46 min
  • AI Agents Shift, Not SAVE, Your Time (Don't Be Fooled by Marketing Hype)
    Dec 10 2025

    What happens when you automate away a six-hour task? You don't get more free time ... you just do more work.

    In this impromptu conversation, Kimberly and Jessica break down what agentic AI actually does, why the "time savings" narrative misses the point entirely, and how to figure out which workflows are worth automating.

    WHAT WE COVER:

    • What agentic AI actually is (and how it's different from ChatGPT)
    • Jessica's real invoice automation workflow: how she turned 6 hours of manual work into an AI agent task
    • The framework for identifying automatable workflows (repetitive, skill-free, multi-step tasks)
    • Why this beats creative AI work: no judgment calls, just execution
    • The Blackboard experiment: what happens when an agent does something you didn't ask it to do
    • Security & trust: passwords, login credentials, and where your data actually goes
    • Enterprise-level agent solutions (and why they're not quite ready yet)
    • The uncomfortable truth: freed-up time doesn't mean fewer hours—it means more output
    • How detailed instruction manuals prepared Jessica for prompt engineering
    • The human bottleneck: why your whole organization has to move at the same speed
    • Why marketing and research are next on the chopping block

    TOOLS MENTIONED:

    • ChatGPT Pro with Agents — https://openai.com/chatgpt/
    • Perplexity Comet (agentic browser) — https://www.perplexity.ai/comet
    • Zoho Billing — https://www.zoho.com/billing/
    • Constant Contact — https://www.constantcontact.com
    • Zapier — https://zapier.com
    • Elicit (systematic reviews & literature analysis) — https://elicit.com
    • Corpus of Contemporary American English — https://www.english-corpora.org/coca/
    • Descript — https://www.descript.com
    • Canva — https://www.canva.com
    • Riverside.fm — https://riverside.fm

    TIMESTAMPS:

    • 0:00 — Opening & guest cancellation
    • 1:18 — Podcast website & jingle development (and why music taste is complicated)
    • 6:34 — What is agentic AI? Jessica's invoice automation example
    • 10:33 — Why this use case actually works
    • 14:15 — The Blackboard incident (when the agent went off-script)
    • 16:21 — Security concerns: passwords, login credentials, and trust
    • 18:35 — Why speed doesn't matter (as long as it's faster than the human bottleneck)
    • 19:27 — Enterprise solutions on the horizon
    • 20:57 — United Airlines cease-and-desist letters for replica training sites
    • 22:27 — Why Kimberly can't use agents in her CCRC work
    • 25:21 — How to identify your automatable workflows (the practical framework)
    • 27:57 — Research automation with Elicit & corpus linguistics
    • 30:45 — The core insight: AI shifts time, it doesn't save it
    • 34:10 — Organizational bottlenecks & human capacity limits
    • 35:08 — Pit & Peach (staying in your own canoe)

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/








    38 min
  • Once You See It, You Can't Unsee It: The Enshittification of Tech Platforms
    Nov 26 2025

    In this conversation, Kimberly Becker and Jessica Parker explore the concept of 'enshittification' (as articulated by Cory Doctorow in his book Enshittification: Why Everything Suddenly Got Worse and What To Do About It) as it relates to generative AI and tech platforms. They discuss the stages of platform development, the shift from individual users to business customers, and the implications of algorithmic changes on user experience.

    The conversation also explores the work of AI researchers Emily M. Bender and Timnit Gebru, whose paper "On the Dangers of Stochastic Parrots" raised critical questions about the limitations and risks of large language models. The hosts examine the role of data privacy, the impact of AI on labor, the need for regulation, and the dangers of market consolidation, drawing on case studies like Amazon's acquisition and eventual shutdown of Diapers.com and Google's Project Maven controversy.

    Key Takeaways

    • Enshittification refers to the degradation of tech platforms over time
    • The shift from individual users to business customers can lead to worse outcomes for end users
    • Data privacy is a critical concern as companies monetize user interactions
    • AI is predicted to significantly displace workers in coming years
    • Regulation is necessary to protect consumers from unchecked corporate power
    • Market consolidation can stifle competition and innovation
    • Recognizing these patterns is essential for navigating the tech landscape

    Further Reading & Resources

    • Cory Doctorow's Pluralistic blog
    • Cory Doctorow on enshittification
    • Enshittification: Why Everything Suddenly Got Worse and What To Do About It
    • The Internet Con: How to Seize the Means of Computation
    • "On the Dangers of Stochastic Parrots" by Bender & Gebru
    • Amazon/Diapers.com case study
    • Google Project Maven controversy
    • AI job displacement tracker
    • 2024 Tech Layoffs Tracker

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/








    58 min
  • Maternal AI and the Myth of Women Saving Tech
    Nov 19 2025

    In this conversation, we sit down with Dr. Michelle Morkert, a global gender scholar, leadership expert, and founder of the Women’s Leadership Collective, to unpack the forces shaping women’s relationship with AI.

    We begin with research indicating that women are 20–25% less likely to use AI than men, but quickly move beyond the statistics to explore the deeper social, historical, and structural reasons why.

    Dr. Morkert brings her feminist and intersectional perspective to these questions, offering frameworks that help us see beyond the surface-level narratives of gender and AI use. This conversation is less about “women using AI” and more about power, history, social norms, and the systems we’re all navigating.

    If you’ve ever wondered why AI feels different for women—or what a more ethical, community-driven approach to AI might look like—this episode is for you.

    💬 Guest: Dr. Michelle Morkert – https://www.michellemorkert.com

    📚 Books & Scholarly Works Mentioned

    • Global Evidence on Gender Gaps and Generative AI: https://www.hbs.edu/ris/Publication%20Files/25023_52957d6c-0378-4796-99fa-aab684b3b2f8.pdf
    • Pink Pilled: Women and the Far Right (Lois Shearing): https://www.barnesandnoble.com/w/pink-pilled-lois-shearing/1144991652l
    • Scary Smart (Mo Gawdat – maternal AI concept): https://www.mogawdat.com/scary-smart


    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/








    1 hr 1 min