Killer Robots and Mass Surveillance

This week we talk about Anthropic, the Department of Defense, and OpenAI.

We also discuss red lines, contracts, and lethal autonomous systems.

Recommended Book: Empire of AI by Karen Hao

Transcript

Lethal autonomous weapons, often called lethal autonomous systems, autonomous weapons systems, or just ‘killer robots,’ are military hardware that can operate independently of human control, searching for and engaging targets based on their programming, and thus not needing a human being to point them at things or pull the trigger.

The specific nature and capabilities of these devices vary substantially from context to context, and even between scholars writing on the subject, but in general these are systems—be they aerial drones, heavy gun emplacements, some kind of mobile rocket launcher, or a human- or dog-shaped robot—that are capable of carrying out tasks and achieving goals without needing constant attention from a human operator.

That’s a stark contrast with drones that require either a human controller or what’s called a human-in-the-loop in order to make decisions. Some drones and other robots and weapons require full hands-on control, with a human steering them, pointing their weapons, and pulling the trigger, while others are semi-autonomous in that they can be told to patrol a given area and look for specific things, but then reach out to a human-in-the-loop for final decisions about whatever they intend to do, including and especially weapon-related things; a human has to be the one to drop the bomb or fire the gun in most cases, today.

Fully autonomous weapon systems, without a human in the loop, are far less common at this point, in part because it’s difficult to create a system so capable that it doesn’t require human intervention at times, but also because it’s truly dangerous to create such a device.

Modern artificial intelligence systems are incredibly powerful, but they still make mistakes, and just as an LLM-based chatbot might muddle its words or add extra fingers to a made-up person in an image it generates, or, a step further, might fabricate research referenced in a paper it produces, an AI-controlled weapon system might see targets where there are no targets, or might flag a friendly, someone on its own side, or a peaceful, noncombatant human, as a target.
And if there’s no human-in-the-loop to check the AI’s understanding and correct it, that could mean a lot of non-targets being treated like targets, their lives ended by killer robots that gun them down or launch missiles at their homes.

On a larger scale, AI systems controlling arrays of weapons, or even entire militaries, acting as strategic commanders, could wipe out all human life by sparking a nuclear war.

A recent study conducted at King’s College London found that in simulated crises, across 21 scenarios, AI systems that thought they had control of nation-state-scale militaries opted for nuclear signaling, escalation, and tactical nuclear weapon use 95% of the time, never once across all simulations choosing one of the eight de-escalatory options made available to them.

All of which suggests to the researchers behind this study that the norm, approaching the level of taboo, that has governed nuclear weapons use globally since WWII, among humans at least, may not have carried over to these AI systems, and that full-blown nuclear conflict may thus become more likely under AI-driven military conditions.

What I’d like to talk about today is a recent confrontation between one AI company—Anthropic—and its client, the US Department of Defense, and the seeming implications of both this conflict and what happened as a result.

—

In late 2024, the US Department of Defense—which, by the way, is still the official title, despite the President calling it the Department of War, since only Congress can change its name—partnered with Anthropic to get a version of its Claude LLM-based AI model that could be used by the Pentagon.

Anthropic worked with Palantir, which is, basically, a data-aggregation and surveillance company co-founded by Peter Thiel and very favored by this administration, and with Amazon Web Services, to make that Claude-for-the-US-military relationship happen, those interconnections allowing this version of the model to be used for classified missions.

Anthropic received a $200 million contract with the Department of Defense in mid-2025, as did a slew of other US-based AI companies, including Google, xAI, and OpenAI. But while the Pentagon has been funding a bunch of US-based AI companies for this purpose, only Claude was reportedly used during the early 2026 raid on Venezuela, during which now-former Venezuelan President Maduro was taken by US forces.

Word on the street is that Claude is the only model the Pentagon has found truly useful for these sorts of operations, though publicly they’re saying that investments in all of these models have borne fruit, at least to some degree.

So ...