EU's AI Act Sprint: Grace Periods and Loopholes as August Deadline Looms
Just this week, Euractiv dropped a bombshell: the European Commission has delayed high-risk AI guidelines yet again, missing the February 2nd target and pushing back what was already a revised timeline. Observers like the CADE project warn that several member states haven't even named their national supervisory authorities. It's chaos in the implementation sprint, listeners, with CEN-CENELEC scrambling to finalize standards by late 2026 for that presumption of conformity.
Enter the AI Omnibus proposal from the Commission in November 2025, as Pinsent Masons reports—a frantic bid to lighten the load before August. They're floating grace periods: six months extra for retrofitting transparency in generative AI already out there, up to February 2027. Small and mid-cap firms get concessions on registration if self-assessments show low real-world risk. AI literacy? Shifted from companies to the Commission and states. And get this: EU-level regulatory sandboxes for SMEs, expanding those national testing grounds to fend off fragmentation.
But peel back the layers, and there's thought-provoking unease. AGPLaw lays out the risk tiers in crystal-clear terms: banned manipulative systems exploiting vulnerabilities, and high-risk mandates for healthcare, law enforcement, and education under Annex III, covering things like critical infrastructure management or biometric categorization that infers sensitive traits. Providers must nail risk management, data governance, and technical documentation. Reed Smith clocks it alongside the Cyber Resilience Act landing in September and the Data Act in the same breath.
Yet Cambridge Analytica's ghost haunts us, per their deep dive. The Act bans overt political profiling but greenlights behavioral inference in "low-risk" realms—marketing, ads, content recs. Think OCEAN personality models from Facebook likes, now powering Meta's $500 billion ad empire or Pymetrics' hiring games. It's surveillance capitalism rebranded as personalization: lenders profiling from app data, recommenders exploiting psych vulnerabilities. High-risk gets oversight; commerce gets a wink. Does this prevent another CA? No—it segments the infrastructure, preserving profitability while democracies breathe easier.
As August looms, businesses in Brussels boardrooms and Canadian SMEs eyeing EU clients via Onley Law are stress-testing compliance. The Act's extraterritorial bite means global ripples. Will it foster ethical innovation or stifle it with bureaucracy? One thing's sure: AI's genie is out, and Europe's rewriting the bottle.
Thanks for tuning in, listeners—subscribe for more deep dives. This has been a Quiet Please production, for more check out quietplease.ai.
Some great deals: https://amzn.to/49SJ3Qs
For more check out http://www.quietplease.ai
This content was created in partnership with, and with the help of, artificial intelligence (AI).