EU AI Act: Reshaping the Future of Technology with Accountability
But here's the twist that's got the tech world buzzing. Just days ago, on December 17, 2025, the European Commission dropped the first draft of its Code of Practice for marking AI-generated content, tackling Article 50 head-on. Providers of generative AI must watermark text, images, audio, and video in machine-readable formats—robust against tampering—to flag deepfakes and synthetic media. Deployers, that's you and me using these tools professionally, face disclosure duties for public-interest content unless it's human-reviewed. The European AI Office is corralling independent experts, industry players, and civil society through workshops, aiming for a final code by June 2026. Feedback poured in until January 23, with revisions slated for March. It's a collaborative sprint, not a top-down edict, designed to build trust amid the misinformation wars.
Meanwhile, Spain's Agency for the Supervision of Artificial Intelligence, AESIA, released 16 guidance documents last week: introductory overviews, technical deep dives on conformity assessments and incident reporting, even checklists with templates. All in Spanish for now, but a godsend for navigating high-risk obligations like post-market monitoring.
The reviews, though, are mixed. Professor Toon Calders at the University of Antwerp hails the Act as a "quality seal" for trustworthy EU AI, one that boosts global faith. Innovation hawks cry foul, seeing a straitjacket that stifles Europe's edge against U.S. giants and China. Enter the Digital Omnibus: European Commissioner Valdis Dombrovskis announced it recently to trim regulations, potentially delaying high-risk rules, like AI in loan approvals or hiring, until 2027. "We cannot afford to pay the price for failing to keep up," he warned at the presser. KU Leuven's Professor Jan De Bruyne echoes the urgency: great laws flop without enforcement.
As I sip my cooling coffee, I ponder the ripple: staffing firms inventorying AI screeners, product managers scrambling for watermark tech, all racing toward August. Will this risk-tiered regime—banning unacceptable risks outright—forge resilient AI supremacy, or hobble us in the global sprint? It's a quiet revolution, listeners, reshaping code into accountability.
Thanks for tuning in—subscribe for more tech frontiers. This has been a Quiet Please production, for more check out quietplease.ai.
Some great deals: https://amzn.to/49SJ3Qs
For more check out http://www.quietplease.ai
This content was created in partnership with, and with the help of, artificial intelligence (AI).