
Artificial Intelligence Act - EU AI Act

By: Quiet. Please
Listen for free

About this listen

Welcome to "The European Union Artificial Intelligence Act" podcast, your go-to source for in-depth insights into the groundbreaking AI regulations shaping the future of technology within the EU. Join us as we explore the intricacies of the AI Act, its impact on various industries, and the legal frameworks established to ensure ethical AI development and deployment.

Whether you're a tech enthusiast, legal professional, or business leader, this podcast provides valuable information and analysis to keep you informed and compliant with the latest AI regulations.

Stay ahead of the curve with "The European Union Artificial Intelligence Act" podcast – where we decode the EU's AI policies and their global implications. Subscribe now and never miss an episode!

Keywords: European Union, Artificial Intelligence Act, AI regulations, EU AI policy, AI compliance, AI risk management, technology law, AI ethics, AI governance, AI podcast.

Copyright 2024 Quiet. Please
Economics, Politics & Government
Episodes
  • Europe's AI Reckoning: The EU's Groundbreaking Regulation Shakes Up the Tech Landscape
    Jul 14 2025
    Imagine waking up in Brussels this morning and realizing that, as of a few weeks ago, the European Union’s AI Act is no longer just the stuff of policy briefings and think tank debates—it’s a living, breathing regulation that’s about to transform tech across the continent and beyond. Effective since February, the EU AI Act is carving out a new global reality for artificial intelligence, and the clock is ticking—August 2, 2025, is D-day for the new transparency rules targeting General-Purpose AI models. That means if you’re building, selling, or even adapting models like GPT-4, DALL-E, or Google’s Gemini for the EU market, you’re now on the hook for some of the world’s most comprehensive and contentious AI requirements.

    Let’s get specific. The law is already imposing AI literacy obligations across the board: whether you’re a provider, a deployer, or an importer, you need your staff to have a real grasp of how AI works. No more black-box mystique or “it’s just an algorithm” hand-waving. By August, anyone providing a General-Purpose AI model will have to publish detailed summaries of their training data, like a nutrition label for algorithms. And we’re not talking about vague assurances. The EU is demanding documentation “sufficiently detailed” to let users, journalists, and regulators trace the DNA of what these models have been fed. Think less ‘trust us,’ more ‘show your work—or risk a €15 million fine or 3% of worldwide annual turnover.’ These are GDPR-level risks, and the comparison isn’t lost on anyone in tech.

    But let’s not pretend it’s frictionless. In the past week alone, Airbus, Siemens Energy, Lufthansa, ASML, and a who’s-who of European giants fired off an open letter begging the European Commission for a two-year delay. They argue the rules bring regulatory overload, threaten competitiveness, and, with key implementation standards still being thrashed out, are almost impossible to obey. The Commission has so far said no—August 2 is still the target date—but Executive Vice President Henna Virkkunen has left a crack in the door, hinting at “targeted delays” if essential standards aren’t ready.

    This tension is everywhere. The voluntary Code of Practice released July 10 is a preview of the coming world: transparency, stricter copyright compliance, and systemic risk management. Companies like OpenAI and Google are reviewing the text; Meta and Amazon are holding their cards close. There’s a tug-of-war between innovation and caution, global ambition and regulatory rigor.

    Europe wants to be the AI continent—ambitious, trusted, safe. Yet building rules for tech that evolves while you write the legislation is an impossible engineering problem. The real test starts now: will the AI Act make Europe the model for AI governance, or slow it down while others—looking at you, Silicon Valley and Shanghai—race ahead? The debate is no longer theoretical, and as deadlines close in, the world is watching.

    Thanks for tuning in, and don’t forget to subscribe. This has been a Quiet Please production. For more, check out quiet please dot ai.

    Some great deals: https://amzn.to/49SJ3Qs

    For more, check out http://www.quietplease.ai
    3 min
  • Navigating the Labyrinth of the EU AI Act: A Race Against Compliance and Innovation
    Jul 12 2025
    Today is July 12th, 2025, and if you thought European bureaucracy moved slowly, let me introduce you to the EU AI Act — which is somehow both glacial and frighteningly brisk at the same time. Since February, this sweeping new law has been the talk of Brussels, Berlin, Paris, and, frankly, anyone in tech who gets heart palpitations at the mere mention of “compliance matrix.” The AI Act is now the world’s most ambitious legal effort to bring artificial intelligence systems to heel — classifying them by risk, imposing bans on “unacceptable” uses, and, for the AI providers and deployers among us, requiring that staff now be schooled in what the Act coyly calls “AI literacy.”

    But let’s not pretend this is all running like automated clockwork. As of this week, the European Commission just published the final Code of Practice for general-purpose AI, or GPAI, those foundational models like GPT-4, DALL-E, and their ilk. Now, OpenAI, Google, and other industry titans are frantically revisiting their compliance playbooks, because the fines for getting things wrong can reach up to 7% of global turnover. That’ll buy a lot of GPU clusters, or a lot of legal fees. The code is technically “voluntary,” but signing on means less red tape and more legal clarity, which, in the EU, is like being handed the cheat codes for Tetris.

    Transparency is the new battle cry. The Commission’s Henna Virkkunen described the Code as a watershed for “tech sovereignty.” Now, AI companies that sign up will need to share not only what their models can do, but also what they were trained on — think of it as a nutrition label for algorithms. Paolo Lazzarino from the law firm ADVANT Nctm says this helps us judge what’s coming out of the AI kitchen, data by data.

    Yet, not everyone is popping champagne. More than forty-five heavyweights of European industry — Airbus, Siemens Energy, even Mistral — have called for a two-year “stop the clock” on the most burdensome rules, arguing the AI Act’s moving goalposts and regulatory overload could choke EU innovation right when the US and China are speeding ahead. Commission spokesperson Thomas Regnier, unwavering, stated: “No stop the clock, no pause.” And don’t expect any flexibility from Brussels in trade talks with Washington or under the Digital Markets Act: as far as the EU is concerned, these rules are about European values, not bargaining chips.

    Here’s where it gets interesting for the would-be compliance artist: the real winners in this grand experiment might be the very largest AI labs, who can afford an armada of lawyers and ethicists, while smaller players are left guessing at requirements — or quietly shifting operations elsewhere.

    So, will the Act make Europe the world’s beacon for ethical AI, or just a museum for lost startups? The next few months will tell. Thanks for tuning in, listeners. Don’t forget to subscribe. This has been a Quiet Please production. For more, check out quiet please dot ai.

    Some great deals: https://amzn.to/49SJ3Qs

    For more, check out http://www.quietplease.ai
    3 min
  • EU's AI Act Rewrites the Global AI Rulebook
    Jul 10 2025
    Welcome to the era where artificial intelligence isn’t just changing our world, but being reshaped by law at an unprecedented pace. Yes, I’m talking about the European Union’s Artificial Intelligence Act, the so-called AI Act, which, as of now, is rapidly transitioning from legislative text to concrete reality. The Act officially entered into force last August, and the compliance countdown is all but inescapable, especially with that pivotal August 2, 2025 deadline looming for general-purpose AI models.

    Let’s get right to it: The EU AI Act is the world’s first comprehensive regulatory framework for AI, designed to set the rules of play not only for European companies, but for any organization worldwide that wants access to the EU’s massive market. Forget the GDPR—this is next-level, shaping the global conversation about AI accountability, safety, and risk.

    Here’s how it works. The AI Act paints AI risk with a bold palette: unacceptable, high, limited, and minimal risk. Unacceptable risk? Banned outright. High-risk? Think biometric surveillance, AI in critical infrastructure, employment—those undergo strict compliance and transparency measures. Meanwhile, your run-of-the-mill chatbots? Minimal or limited risk, with lighter obligations. And then there’s the beast: general-purpose AI models, like those powering the latest generative marvels. These are subject to special transparency and evaluation rules, with slightly fewer hoops for open source models.

    Now, if you’re hearing a faint whirring sound, that’s the steady hum of tech CEOs furiously lobbying Brussels. Just last week, leaders from companies like ASML, Meta, Mistral, and even Carrefour threw their weight behind an open letter—46 European CEOs asking the Commission to hit pause on the AI Act. Their argument? The guidelines aren’t finalized, the compliance landscape is murky, and Europe risks throttling its own innovation before it can compete with the US and China. They call their campaign #stoptheclock.

    But the EU Commission’s Thomas Regnier shot that down on Friday—no stop the clock, no grace period, and absolutely no pause. The timeline is the timeline: August 2025 for general-purpose models, August 2026 for high-risk systems, and phased requirements in between. And for the record, this is no empty threat—the Act creates national notifying authorities, demands conformity assessments, and empowers a European Artificial Intelligence Board to keep Member States in line.

    What’s more, as of February, every provider and deployer of AI in the EU must ensure their staff have a “sufficient level of AI literacy.” That’s not just a suggestion; it’s law. The upshot? Organizations are scrambling to develop robust training programs and compliance protocols, even as the final Code of Practice for General-Purpose AI models is still delayed, thanks in part to lobbying from Google, Meta, and others.

    Will this new regulatory order truly balance innovation and safety? Or will Europe’s bold move become a cautionary tale for overregulation in AI? Only time will tell, but one thing is certain: the next year is make or break for every AI provider with European ambitions.

    Thanks for tuning in, and don’t forget to subscribe. This has been a Quiet Please production. For more, check out quiet please dot ai.

    Some great deals: https://amzn.to/49SJ3Qs

    For more, check out http://www.quietplease.ai
    4 min
It’s now possible to take any text, tidy it up a bit, and add appropriate pauses and intonation. This is just plain text narrated by artificial intelligence.

Artificial voice, without pauses, etc.
