Taming the machine: Why regulating AI feels impossible (but we have to try anyways)

About this episode

"If AI didn’t offer such massive opportunities... we’d likely regulate it out of existence." On the latest episode of the Executive Summary, professor Dan Trefler explores the double-edged sword of artificial intelligence: Are the risks worth the rewards? Is bureaucratic red tape the solution — or just another hurdle? And how can the average citizen help fight the "great regulatory" battle?

Show notes:

[0:00] In 2023, tech leaders and academics signed an open letter calling for a pause on advanced AI development until government regulation caught up… spoiler alert: it didn’t.

[0:48] Five years ago, it would have been impossible to imagine where AI development would be today… what will we see in the next five years?

[1:36] Meet Dan Trefler, a professor of economics and policy at the Rotman School of Management.

[2:29] Regulating “Artificial Intelligence” is impossible.

[3:50] What’s the 2025 state of affairs when it comes to regulating uses of AI?

[4:29] Dan sees one region of the world regulating the technology’s use about as well as it can be done.

[7:12] What is the competition problem?

[7:48] What is the coordination problem?

[8:29] What happens when we have competition and coordination working together seamlessly?

[9:46] So why can’t AI regulations follow the same successful model as car regulations?

[10:19] What’s the interpretability problem?

[11:18] California’s failed attempt at regulating AI companies is the perfect microcosm of the challenges we face.

[12:45] Where is the last place governments should regulate?

[13:49] To get a handle on things now, Dan wants us to focus on (1) extreme risks;

[14:28] (2) learning from other successful regulatory bodies like the FDA;

[14:49] and (3) exploring regulatory incentives that encourage positive uses of the technology.

[15:33] And citizens can help wage the great AI regulatory battle with their own personal choices.

[16:03] “I'm asking people to be much more forward looking than we normally tend to be. I want them to start anticipating risks which don't exist yet, because when they do come, as we've seen with past changes in AI, they will come in such a flurry that we won't be able to shovel our way out of our own homes. So let's start thinking hard about regulating things on a precautionary principle, not because they've happened, but because they might happen.”
