• AI can't read the room

  • Apr 28, 2025
  • Duration: 6 min
  • Podcast
  • Summary

  • Leyla Isik, a professor of cognitive science at Johns Hopkins University, is also senior scientist on a new study examining how good AI is at reading social cues. She and her research team took short videos of people doing things — two people chatting, two babies on a playmat, two people doing a synchronized skate routine — and showed them to human participants, who were then asked questions such as: Are these two communicating with each other? Is the interaction positive or negative? The team then showed the same videos to over 350 open-source AI models. (Which is a lot, though it didn't include all the latest and greatest ones out there.) Isik found that the AI models were much worse than humans at understanding what was going on. Marketplace's Stephanie Hughes visited Isik at her lab at Johns Hopkins to discuss the findings.

