
AI, medicine, models and accountability: could AI play a medical doctor?
As AI evolves, a big issue is accountability. As humans come to work alongside AI, AI can also make mistakes, even if those mistakes are fewer than the ones humans make. I have attached several conversations of mine, Jorge Guerra Pires, PhD, on the topic. When possible, mention the name of the other person. I am concerned about AI in medicine and models in medicine; we certainly have issues, like accountability. We also have opportunities: models outperform humans in several respects, such as information integration, faster processing of big data, and more.
Sources:
https://www.youtube.com/watch?v=Qt77BVJNCAs&t=17s
https://www.youtube.com/watch?v=DOL_QZoi7IY&t=122s
https://www.youtube.com/watch?v=bDtCeofcBOk
https://www.youtube.com/watch?v=Gqjd_gJ13n8&t=7s
https://medium.com/computational-thinking-how-computers-think-decide/could-chatgpt-play-a-medical-doctor-8dfcd8c95538
These sources explore the intersection of artificial intelligence, specifically ChatGPT, and the field of medicine. They discuss ChatGPT's capabilities, including its strong performance on medical exams and its ability to generate human-like responses, and consider whether it could serve as a companion to, or even a replacement for, medical doctors. The texts also raise important ethical questions about responsibility for errors when AI is used in medical decision-making, and highlight the potential of AI to improve mathematical modeling in biology and make it more accessible. The authors emphasize the need for further discussion and research on the integration of AI into healthcare and scientific practice.