EU AI Act Faces Major Overhaul: High-Risk Rules Delayed to 2027 as Europe Tightens Ban on Deepfake Nudity

Imagine this: it's March 23, 2026, and I'm huddled in my Berlin apartment, laptop glowing as notifications ping about the EU AI Act's latest twists. Just days ago, on March 18, the European Parliament's Internal Market and Civil Liberties committees voted 101 to 9 to back postponing high-risk AI rules, fearing standards won't be ready by August 2. MEPs want fixed dates for legal certainty—pushing Annex III high-risk systems like those in education and employment to December 2027, and product safety ones to August 2028. They're even proposing a ban on AI nudifier systems that strip clothes from images without consent, alongside Council ideas to outlaw non-consensual intimate imagery and CSAM generators.

This omnibus simplification package, kicked off by the European Commission's November 2025 digital omnibus, is racing toward a plenary vote on March 26. If approved, trilogues with the Council, whose position dropped March 13, could reshape compliance before the crunch. Providers get a breather on watermarking AI-generated audio, images, video, or text, with MEPs eyeing November 2, 2026, an earlier deadline than the Commission's February 2027 pitch. No more mandatory AI literacy for staff; instead, the Commission and member states will foster it. And the EU AI Office? It's gaining exclusive muscle over systems blending general-purpose AI models, sidelining some national watchdogs except in critical spots like infrastructure or law enforcement.

Think about it, listeners: energy giants from exploration to grid ops, per Baker Botts analysis, face fines of €15 million or 3% of global turnover if high-risk tools falter come deadline. Legal Nodes urges audits now: map every AI system, from in-house models to third-party chatbots, and classify it by risk tier. Unacceptable uses like social scoring have been banned since February 2025; high-risk systems demand risk management and human oversight; limited-risk ones need transparency labels; minimal-risk tools like spam filters get a pass. Extraterritorial claws snag non-EU firms serving Europe; appoint reps or bust.

As Oliver Patel notes on his Substack, today's Act stands firm until amendments land, and August 2, 2026, looms for the high-risk rollout. Europe's risk-based fortress contrasts with Trump's March 20 White House AI framework, raising the question: will phased enforcement stifle innovation or safeguard rights? Control Risks highlights regulatory sandboxes for testing, easing data friction. In Brussels' corridors, this isn't just bureaucracy; it's wiring our future, where AI amplifies humanity or erodes it.

Thanks for tuning in, listeners—subscribe for more deep dives. This has been a Quiet Please production, for more check out quietplease.ai.

Some great Deals https://amzn.to/49SJ3Qs

For more check out http://www.quietplease.ai

This content was created in partnership and with the help of artificial intelligence (AI).