EU AI Act Transforms from Theory to Operational Reality, Shaping Global Tech Landscape
Here’s the pivot: as of February this year, bans on so‑called “unacceptable risk” AI are no longer theory. According to the European Commission and recent analysis from Truyo and Electronic Specifier, systems for social scoring, manipulative nudging, and certain real‑time biometric surveillance are now flat‑out illegal in the European Union. That’s not ethics talk; that’s market shutdown talk.
Then, in August 2025, the spotlight swung to general‑purpose AI models. King & Spalding and ISACA both point out that rules for these GPAI systems are now live: transparency, documentation, and risk management are no longer “nice to have” – they’re compliance surfaces. If you’re OpenAI, Anthropic, Google DeepMind, or a scrappy European lab in Berlin or Paris, the model card just turned into a quasi‑legal artifact. And yes, the EU backed this with a Code of Practice that many companies are treating as the de facto baseline.
But here’s the twist from the last few weeks: the Digital Omnibus package. The European Commission’s own digital‑strategy site confirms that on 19 November 2025, Brussels proposed targeted amendments to the AI Act. Translation: the EU just admitted the standards ecosystem and guidance aren’t fully ready, so it wants to delay some of the heaviest “high‑risk” obligations. Reporting from King & Spalding and DigWatch frames this as a pressure‑release valve for banks, hospitals, and critical‑infrastructure players that were staring down impossible timelines.
So now we’re in this weird liminal space. Prohibitions are in force. GPAI transparency rules are in force. But many of the most demanding high‑risk requirements might slide toward 2027 and 2028, with longstop dates the Commission can’t move further. Businesses get breathing room, but also more uncertainty: compliance roadmaps have become living documents, not Gantt charts.
Meanwhile, the European AI Office in Brussels is quietly becoming an institutional supernode. The Commission’s materials and the recent EU & UK AI Round‑up describe how that office will directly supervise some general‑purpose models and even AI embedded in very large online platforms. That’s not just about Europe; that’s about setting de facto global norms, the way GDPR did for privacy.
And looming over all of this are the penalties. MetricStream notes fines that can reach 35 million euros or 7 percent of global annual turnover, whichever is higher. That’s not a governance nudge; that’s an existential risk line item on a CFO’s spreadsheet.
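To see why that reads as balance‑sheet risk rather than a rounding error, here is a minimal back‑of‑the‑envelope sketch of the penalty math, assuming the Act’s “whichever is higher” rule for its top tier; the turnover figures are purely illustrative, not real company data:

```python
# Back-of-the-envelope sketch of the EU AI Act's top-tier penalty ceiling.
# Assumption: the maximum fine is EUR 35M or 7% of global annual turnover,
# whichever is HIGHER. Turnover figures below are illustrative only.

FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07  # 7 percent of global annual turnover

def max_fine(global_annual_turnover_eur: float) -> float:
    """Return the theoretical maximum fine for a given global turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

if __name__ == "__main__":
    # Illustrative turnovers: EUR 100 million, 1 billion, 50 billion.
    for turnover in (100e6, 1e9, 50e9):
        print(f"Turnover EUR {turnover:,.0f} -> max fine EUR {max_fine(turnover):,.0f}")
```

At an illustrative 50 billion euros of turnover, the 7 percent prong dominates at 3.5 billion euros, which is exactly why large platforms treat the Act as a CFO‑level exposure rather than a legal footnote.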
So the question I’d leave you with is this: when innovation teams in San Francisco, Bengaluru, and Tel Aviv sketch their next model architecture, are they really designing for performance first, or for the EU’s risk taxonomy and its sliding but very real deadlines?
Thanks for tuning in, and make sure you subscribe so you don’t miss the next deep dive into how law rewires technology. This has been a Quiet Please production. For more, check out quiet please dot ai.
Some great deals: https://amzn.to/49SJ3Qs
For more check out http://www.quietplease.ai
This content was created in partnership with, and with the help of, artificial intelligence (AI).