Episodes

  • 303 - Guest: Virginia Dignum, Responsible AI Expert, part 1
    Apr 6 2026

    This and all episodes at: https://aiandyou.net/ .

    AI runs not just broadly across human interests, but deeply. And at every level it seems to create paradoxes. How can it be useful without power? How can it be safe with power? We want it to take over jobs, but still leave us with meaning and purpose. We deny it the possibility of becoming sentient, but how can we trust something that lacks compassion? My guest today has written a whole book on these paradoxes. Calling from Sweden is Virginia Dignum, professor of responsible artificial intelligence at Umeå University, and the author of the 2019 book Responsible Artificial Intelligence. She’s an internationally recognized expert in AI ethics and policy who has led initiatives for the European Commission, the United Nations, the World Economic Forum, UNESCO, and UNICEF, among others. And we are talking about her new book, The AI Paradox: How to Make Sense of a Complex Future.

    We talk about the issues of ethical choices and power that AI raises, where the difference between humans and LLMs matters, Virginia’s path to the work she’s doing and her book, and our trend toward techno-solutionism and what it means for us.

    All this plus our usual look at today's AI headlines!

    Transcript and URLs referenced at HumanCusp Blog.

    28 m
  • 302 - Guest: Ricky Sethi, Artificial Metacognition Researcher, part 2
    Mar 30 2026

    This and all episodes at: https://aiandyou.net/ .

    Have you ever thought about your thoughts? About what or how you’re thinking? It gets real meta real fast, doesn’t it? That’s called metacognition, and humans and certain other creatures do it. But what about AI? We’re coming back to the interview with Ricky Sethi, Professor of Computer Science at Fitchburg State University, and researcher into artificial metacognition, or whether and how machines can think about thinking. Ricky’s research spans fact-checking misinformation, virtual communities, and artificial metacognition, where he focuses on designing GenAI systems that can monitor, evaluate, and regulate their own reasoning. He is Director of Research for the Madsci Network, and an Adjunct Professor at Worcester Polytechnic Institute. He has over 50 scholarly publications, and his work has been covered in outlets such as the Chicago Tribune, The Conversation, and Communications of the ACM.

    We conclude the interview by talking about his research into disinformation, measuring the emotions associated with it, how clusters of models in different roles could assess AIs for lying and implement AI safety, and how AI is affecting job opportunities in research.

    All this plus our usual look at today's AI headlines!

    Transcript and URLs referenced at HumanCusp Blog.

    36 m
  • 301 - Guest: Ricky Sethi, Artificial Metacognition Researcher, part 1
    Mar 23 2026

    This and all episodes at: https://aiandyou.net/ .

    Have you ever thought about your thoughts? About what or how you’re thinking? It gets real meta real fast, doesn’t it? That’s called metacognition, and humans and certain other creatures do it. But what about AI? Can it think about thinking? Here to help us understand this whole thing is artificial metacognition researcher Ricky Sethi. He is Professor of Computer Science at Fitchburg State University, Director of Research for the Madsci Network, and an Adjunct Professor at Worcester Polytechnic Institute. His research spans fact-checking misinformation, virtual communities, and artificial metacognition, where he focuses on designing GenAI systems that can monitor, evaluate, and regulate their own reasoning. Is that cool or what? Ricky has a bachelor’s in neurobiology and physics, an M.S. in physics/information systems, and a PhD in AI from UC Riverside. He has over 50 scholarly publications, and his work has been covered in outlets such as the Chicago Tribune, The Conversation, and Communications of the ACM. Recently, he has introduced the Metacognitive State Vector framework for quantifying key cognitive signals in ensembles of large language models.

    We talk about how this work spans computer science, neuroscience, and psychology; System One and System Two thinking come up again, with a beautiful explanation. We also talk about testing and measuring metacognition in humans and AIs – and what about dolphins?

    All this plus our usual look at today's AI headlines!

    Transcript and URLs referenced at HumanCusp Blog.

    36 m
  • 300 - Guest: Mark Peres, Civic Entrepreneur, part 2
    Mar 16 2026

    This and all episodes at: https://aiandyou.net/ .

    Because AI touches our lives down to our core where our emotions and subconscious reside, we need to be touched with the important lessons that our fellow humans wish to communicate about AI through vehicles like art, poetry, and, in the case of today’s guest, fiction. Mark Peres is a professor, author, and civic innovator with decades of experience teaching leadership and ethics at Johnson & Wales University. He’s just published The Accord, a powerful speculative novel exploring the relationship between a philosopher and a sentient general AI, Lyla.

    As much as that sounds like a description of any number of sensationalist and shallow works that you and I could name, this is not in that category. I found his book remarkable for the level of maturity it granted the reader and the no-holds-barred courage with which it tackled the question of the identity of a future artificial general intelligence - which may not be so far in the future any more.

    We talk about why the AI character of Lyla has a true sense of identity and mortality, whether control over advanced AI is possible, principles for human–AI coexistence, what responsible use, transparency, and “cognitive autonomy” look like for today’s university students, what it means to “humanize” AI before trying to regulate it, and how to take responsibility for our future with AI.

    All this plus our usual look at today's AI headlines!

    Transcript and URLs referenced at HumanCusp Blog.

    34 m
  • 299 - Guest: Mark Peres, Civic Entrepreneur, part 1
    Mar 9 2026

    This and all episodes at: https://aiandyou.net/ .

    We shouldn’t learn the important lessons of AI through just exposition, through just descriptions, explanations, lessons, and textbooks. But because AI touches our lives down to our core where our emotions and subconscious reside, we need to be similarly touched with the important lessons that our fellow humans wish to communicate about AI through art, poetry, and, in the case of today’s guest, fiction. Mark Peres is a professor, author, and civic innovator with decades of experience teaching leadership and ethics at Johnson & Wales University. He’s just published The Accord, a powerful speculative novel exploring the relationship between a philosopher and a sentient general AI, Lyla.

    I will say right now that whatever stereotypes that description is evoking in you – and god knows there are lots to choose from – I want you to throw them out. And you’ll soon see why as we talk about the big questions the story raises about consciousness, responsibility, and what we owe intelligent machines, the clash between universities, corporations, and government forces trying to control new technology, the growing role of AI as companion and confidant in everyday life, and the real-world headlines and classroom debates that inspired the book.

    All this plus our usual look at today's AI headlines!

    Transcript and URLs referenced at HumanCusp Blog.

    29 m
  • 298 - Guest: Holly Elmore, AI Pause Advocate, part 2
    Mar 2 2026

    This and all episodes at: https://aiandyou.net/ .

    In 2023 a global movement called Pause AI started, advocating for a pause in the development of powerful AI, and on the show we have its co-founder, Holly Elmore. Their website says: “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” Holly is the founder of the Pause AI US group and has organized protests in its name. She was formerly an evolutionary biologist, with a PhD from Harvard.

    We conclude the interview by talking about what would actually flip public opinion on AI safety, specific AI bills and regulations, why some leaders warn about risk while accelerating anyway, whether and when it would be safe to unpause, and how you can get involved.

    All this plus our usual look at today's AI headlines!

    Transcript and URLs referenced at HumanCusp Blog.

    37 m
  • 297 - Guest: Holly Elmore, AI Pause Advocate, part 1
    Feb 23 2026

    This and all episodes at: https://aiandyou.net/ .

    In 2023 a global movement called Pause AI started, advocating for a pause in the development of powerful AI, and on the show we have its co-founder, Holly Elmore. Their website says: “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” Holly is the founder of the Pause AI US group and has organized protests in its name. She was formerly an evolutionary biologist, with a PhD from Harvard.

    We talk about what the Pause movement stands for, its overlaps with animal welfare strategies, why pausing is an effective aim and why we need it, the pros and cons of limiting AI training by compute metrics, and comparing AI safety to the airline industry.

    All this plus our usual look at today's AI headlines!

    Transcript and URLs referenced at HumanCusp Blog.

    38 m
  • 296 - Guest: Maya Ackerman, Creative AI Pioneer, part 2
    Feb 16 2026

    This and all episodes at: https://aiandyou.net/ .

    One of the great wounds people are experiencing around AI is in creativity. Look at the writers’ and actors’ strikes, for example. I continue talking about this very sensitive subject with Maya Ackerman, author of the new book Creative Machines: AI, Art, and Us, which tackles it head on, full of emotion, vulnerability, and poetry.

    Maya is the CEO and co-founder of Wave AI, and professor of Computer Science at Santa Clara University. She completed postdoctoral fellowships at Caltech and UC San Diego, and has authored over 50 peer-reviewed publications. She was named a Woman of Influence by the Silicon Valley Business Journal and her work has been featured in Forbes, NPR, Fortune, and NBC News. She is also a singer, pianist, and songwriter.

    We talk about experiments in machine creativity, the distinction between creative processes and creative products, the role of the observer in the creative experience, how bias against AI shows up, and how AI that’s built around compassion and ethical stewardship could support deeper human flourishing in the next few years.

    All this plus our usual look at today's AI headlines!

    Transcript and URLs referenced at HumanCusp Blog.

    31 m