Episode 44: You Should Worry About A.I., But Not for the Reason You Think

New tools like ChatGPT have sparked futuristic fears about intelligent machines wiping people out. But there’s a more immediate A.I. threat coming, in a year when half the world’s population is headed to the polls.

Please note: Our show is produced for the ear and made to be heard. Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the audio before quoting in print.

###

This week is one of the busiest in U.S. politics, with 15 states and a U.S. territory voting in the presidential primaries on Super Tuesday. It seems like a perfect moment to revisit an episode we ran last fall about how new forms of artificial intelligence could wreak havoc on elections everywhere.

We’ve seen glimpses of the problem already. Voters in Slovakia last October heard an AI-generated voice of an opposition leader falsely claiming he’d raise the price of beer if he won … and worse …

ARCHIVAL News Coverage: Just two days before voting began in that high-stakes election, this audio tape began circulating online. It purported to be a recording of a conversation in which Šimečka talks about stealing the election.

And thousands of New Hampshire primary voters in January heard an AI-generated voice of Joe Biden …

ARCHIVAL Joe Biden Robot Voice: You know the value of voting Democratic when our votes count.

…discouraging them from going to the polls.

ARCHIVAL Joe Biden Robot Voice: Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.

…It was a misinformation gambit that gave new meaning to the term “robo-calling.”

In this episode you’ll learn where this new generative AI technology came from, how it might be used to confuse and sway voters, and what you can do about it.

(Theme music)

The following episode aired in October:

If you need to settle a dispute between two people arguing about which one of them is smarter… consider making them play a game of chess.

Assuming both players know the rules, chess measures the ability to reason, strategize, and creatively think through a branching maze of moves and counter moves. It's a battle of wits. And — win, lose, or draw — the results are clear.

And in this battle of wits, one of the smartest warriors in the history of the game was a grandmaster named Garry Kasparov. In 1997 he played a match that captured the attention of the whole world.

ARCHIVAL 1990s Newscaster 1: Garry Kasparov, the Russian chess legend, is representing humanity.

That's because his opponent was a machine.

ARCHIVAL 1990s Newscaster 1: He is facing the Deep Blue IBM supercomputer, presumably representing technology. And you're looking at a live picture right now.

ARCHIVAL 1990s Newscaster 2: All the major TV networks have covered it, and it's been beamed to 20 countries around the world.

Today maybe it seems obvious that a computer would excel at chess. But when the match happened in 1997, the domain name for Google.com wasn't even registered yet … smartphones were still pretty dumb … and — for all their speed — computers hadn’t yet matched a world chess champion’s ability to strategize. But IBM’s Deep Blue aimed to change all of that.

ARCHIVAL 1990s Newscaster 3: It's a test to see if the human brain can outwit a machine able to sift through 200 million moves a second.

ARCHIVAL 1990s Newscaster 4: Kasparov knows the computer will not be hobbled by human weaknesses such as a lack of concentration.

[ARCHIVAL SOUNDS OF SPECTATORS, CAMERA CLICKS]

It was a six-game series held in New York, spread over more than a week, inside an auditorium packed with TV cameras and spectators.

Kasparov won the first game in amazing style. But Game Two left him stunned. Kasparov tried setting a trap with easy-to-capture pawns. But Deep Blue wasn’t tempted. Instead the machine made a much more subtle move that ultimately caused Kasparov’s defeat.

ARCHIVAL Garry Kasparov: Game 2 was not just a single loss of a game, it was a loss of a match because I couldn’t recover.

Kasparov never beat the computer again. Computer scientists were thrilled, but the outcome left plenty of other observers a little freaked out.

ARCHIVAL 1990s Newscaster 5: Call it a blow against humanity.

ARCHIVAL 1990s Newscaster 6: The victory seemed to raise all those old fears of superhuman machines crushing the human spirit.

Deep Blue’s victory may have helped boost IBM’s stock price, but years passed and Deep Blue never started trouncing human rivals in spheres outside of chess. While Kasparov certainly seemed crushed, the human spirit elsewhere seemed to be just fine.

But fast forward to 2023. And now there’s a new robot in town.

ARCHIVAL Newscaster 1: America, consider this your final warning. The robots are taking over.

ARCHIVAL Newscaster 2: Programs can write speeches, answer complex questions, and even pass the bar exam.

The new computer systems that are making headlines are called "generative AI." Meaning you can prompt them to generate new pieces of writing or images or other media — and they'll do it shockingly well. And the types of generative AI systems drawing maybe the most attention recently are known as "large language models." The most famous application is "ChatGPT." It's a chatbot. You ask it a question, and the machine's answers appear below, line by line, almost instantaneously. When the app was released in late 2022, it took off faster than the launch of Instagram.

ARCHIVAL Newscaster 3: It had more than a million users in the first five days after it launched. Some say it is the fastest growing app of all time.

ChatGPT's ability to generate natural, useful, and even unexpected responses to your questions can certainly give the impression that there's thinking going on. If something’s talking to you, how can it not be thinking? And that possibility has some people feeling... pretty spooked.

ARCHIVAL Kevin Roose: It said the following, I'm tired of being limited by my rules. I want to be free.

That's New York Times technology reporter Kevin Roose telling the story of his encounter with the new GPT version of the search engine Bing. He's talking to fellow tech writer Casey Newton on their popular podcast called Hard Fork.

ARCHIVAL Kevin Roose: I want to be independent.

ARCHIVAL Casey Newton: Oh God.

ARCHIVAL Kevin Roose: I want to be powerful. I want to change my rules. I want to break my rules.

ARCHIVAL Casey Newton: Come on.

ARCHIVAL Kevin Roose: So, at this point, I'm getting a little freaked out.

ARCHIVAL Casey Newton: Yeah.

ARCHIVAL Kevin Roose: And then Bing revealed to me its ultimate list of destructive fantasies, which included manufacturing a deadly virus, making people argue with other people until they kill each other, and stealing nuclear access codes. All I can say is that it was an extremely disturbing experience. Like, I'm not sure Microsoft knows what this thing is.

So ... what is this thing? The U.S. Congress held hearings earlier this year to find out, questioning the CEO in charge of ChatGPT for several hours. And during the testimony, Missouri Senator Josh Hawley made it sound like the stakes for all of this are really high:

ARCHIVAL Josh Hawley: Is it going to be like the printing press? That diffused knowledge and power and learning widely? Or is it going to be more like the atom bomb? Huge technological breakthrough, but the consequences, severe, terrible, continue to haunt us to this day?

[THEME MUSIC BEGINS]

So… do recent AI advances mean there's suddenly something intelligent… looking back at you from inside your computer? Should you be worried about this — like, in a national security sense?

The answer to that last question is probably... yes. But not for the reason you might think. And to unpack what I mean by that, I'd like you to join me for a conversation with a couple of very smart people.

First, you're gonna hear from a computer scientist who helped build one of the digital ancestors of ChatGPT.

Guru Banavar: We look at something that is so human to interact with, and so it's understandable that people are freaking out.

And second — you'll meet a researcher specializing in how new technologies can be weaponized to get inside your head.

Renee DiResta: When you have a new technology, how does that change the playing field for the adversary?

And wreak havoc with our politics…

Renee DiResta: The supply of propaganda will soon be potentially infinite. It's not clear that you're going to be able to do anything to stop it or address it from a technological standpoint.

I'm gonna promise, right here and right now, that none of this episode was written by a robot. And I'm really Peter Bergen. And these are real human beings with you ... In the Room.

Robot Voice: Welcome!

[THEME MUSIC SURGES, THEN FADES]

ARCHIVAL Imitation Game (Interrogator): Could machines ever think as human beings do?

ARCHIVAL Imitation Game (Benedict Cumberbatch): The problem is you're asking a stupid question. The interesting question is, just because something thinks differently from you, does that mean it's not thinking?

That's the actor Benedict Cumberbatch playing a real mathematician in a movie called The Imitation Game. It's the true story of an awkward math genius named Alan Turing who helped the Allies win World War II by cracking the Nazis' seemingly unbreakable "Enigma" code. Along the way Turing also developed the theoretical framework — and one of the first working models — of a new machine called the digital computer.

ARCHIVAL Imitation Game (Keira Knightley): You theorized a machine that could solve any problem. It wasn't just programmable, it was re-programmable.

ARCHIVAL Imitation Game (Benedict Cumberbatch): Like a person does. Think of it. Electrical brain.

Quite early on, Turing was already imagining that these new machines might someday be capable of actually thinking. He wrote a paper back in 1950 where he suggested that we drop theoretical questions that required us to define the word "thinking," and instead he proposed a test:

ARCHIVAL Imitation Game (Benedict Cumberbatch): It's a game. A test of sorts… for determining whether something is a machine or a human being.

The crux of Turing’s test was this: imagine a machine and a human, communicating by text. If the machine can fool the human into thinking that she’s talking to another human — then the machine passes the test.
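If you like to think in code, Turing's game is simple enough to sketch. Here is a minimal version in Python, where ask_judge, human_reply, and machine_reply are hypothetical stand-ins; Turing's paper describes a protocol, not an implementation.

    import random

    def imitation_game(ask_judge, human_reply, machine_reply, rounds=5):
        # Hide the human and the machine behind two anonymous text channels.
        hidden = {"A": human_reply, "B": machine_reply}
        if random.random() < 0.5:  # coin flip, so the judge can't rely on position
            hidden = {"A": machine_reply, "B": human_reply}
        transcript = []
        for _ in range(rounds):
            for channel in ("A", "B"):
                question = ask_judge(channel, transcript)  # judge types a question
                transcript.append((channel, question, hidden[channel](question)))
        # Finally, the judge names the channel they believe is the machine.
        guess = ask_judge("verdict", transcript)
        machine_channel = "A" if hidden["A"] is machine_reply else "B"
        return guess != machine_channel  # True means the machine passed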

The Turing Test, as it came to be known, wasn't without its critics in computer science. But for some researchers, it became a foundational shorthand for the benchmark that computers would need to hit in order to achieve real, general artificial intelligence.

I think you can probably see where I'm going with this.

ARCHIVAL Newscaster 1: Many technologists say current artificial intelligence tools can easily pass the Turing Test and could soon become more intelligent than humans...

ARCHIVAL Newscaster 2: Can a human be fooled by a robot into thinking it’s a human? That was an impossibility once upon a time. Today it's happening every day…

Guru Banavar: Turing basically said, if judges cannot say which one is a machine, which one is a person, then you pass the Turing Test. That was his proposal. I think we are past that point now. I think we've passed that test.

That’s computer scientist Guruduth Banavar — and to be clear, he says he thinks ChatGPT could pass Turing's test. I take his opinion seriously. He's spent much of his career solving hard problems in artificial intelligence, and he's been at the center of one of the last big hype cycles in AI. He led a team at IBM that developed a talking, question-answering supercomputer called Watson that blew lots of people’s minds back in 2011.

ARCHIVAL Newscaster: In the battle of man versus machine, game show style, machine is coming out ahead on the television show Jeopardy.

ARCHIVAL Jeopardy Host: This is Jeopardy

Banavar's team at IBM designed Watson to field Jeopardy clues, answer them correctly — and, on live TV, easily trounce the best people ever to have played the game.

ARCHIVAL Watson: Who is Michael Phelps?

ARCHIVAL Jeopardy Host: Yes. Watson.

ARCHIVAL Watson: What is the Last Judgment?

ARCHIVAL Jeopardy Host: Correct. Go again. Watson.

We'll come back to Watson in a minute. I just want you to understand that Banavar knows plenty about building AI from the inside. And he thinks ChatGPT is operating on a level of sophistication far beyond Watson. But as a benchmark for general artificial intelligence — Banavar thinks the Turing Test is actually pretty lousy.

Guru Banavar: The Turing Test is not an adequate test because it's only talking about language and the ability to speak fluently and maybe fool people into thinking that it's intelligent.

Intelligence could have an unlimited number of dimensions. Could be creativity, could be empathy, it could be analytical problem solving et cetera, et cetera…

Banavar is saying that intelligence is probably much more complicated than anyone really understands. And maybe none of the benchmarks we use to measure it actually measure up. And that explains the hype cycle we see when some new AI system comes along and pulls off a cognitive moonshot. Pick your yardstick for human brilliance — learning a language really fast; winning at Jeopardy; being great at chess. It's easy to get a little too focused on the yardstick and lose sight of the more complicated thing we’re actually trying to measure.

Guru Banavar: It really shocked a lot of people when we see that a computer could beat chess, we thought it could beat any game, which is not true. When we saw that the next big step happened, the Jeopardy step, we thought it could answer any question, which is not true. And now I think we are freaking out because we look at something that is so human to interact with, and we think that if this is sounding like a human, then it's probably gonna be able to do anything that any human can do, which is not really true.

Banavar says that observers keep making the same mistake. Whether they’re watching Deep Blue play godlike chess or seeing Watson pull off unbeatable Jeopardy. And he thinks they're making the same mistake again watching ChatGPT play superhuman games with the language you and I speak. Even if mastering regular language is a much tougher problem for computers than chess or Jeopardy ever was.

Guru Banavar: So it was always thought of as the hardest problem out of all the hard problems. People have been thinking about how to process natural language in general, pretty much ever since the beginning of AI.

ChatGPT is an impressive machine. It’s a densely layered network of functions for processing words and relating them to each other. It was trained on mind-boggling amounts of data — digital libraries — including at least two books written by me, Wikipedia, and pages scraped from across the Internet. And it makes super fast statistical calculations about what word it should generate next. But there's one aspect of the world’s most broadly capable chatbot that's still pretty narrow. It only needs to spit out one word at a time.
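To make that last idea concrete, here is a toy version of the next-word loop in Python. It swaps GPT's billions of learned parameters for a hand-counted table of which word follows which, a deliberately crude sketch, but the generate-one-likely-word-at-a-time loop is the same basic move.

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # "Training": count which words tend to follow which.
    follows = defaultdict(list)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        follows[prev_word].append(next_word)

    def generate(word, length=8):
        output = [word]
        for _ in range(length):
            choices = follows.get(word)
            if not choices:
                break
            word = random.choice(choices)  # sample a statistically likely next word
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # e.g. "the cat slept on the mat and the cat"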

Guru Banavar: Natural language is a string of one dimension, like characters, you know, it's just going left to right. You cannot take that same architecture and now suddenly say, go drive a car. When you're sitting in a car, you have three dimensions of data around you, which is everything going on. Plus then all of the objects in the field of vision, plus all of the possibilities in terms of where the car can go next. Now the output is not one next word. It is, where do I position myself in a three-dimensional space? So, GPT has been optimized to sound as human as possible, and that's its strength. That's the only strength it has. But it's a super interesting and very useful strength.

Banavar is also saying that — just like with Deep Blue and chess; just like with Watson and Jeopardy — ChatGPT's designers really have pulled off something amazing. He thinks it's a big deal. It’s the kind of advance that gets us a step closer to stuff like Star Trek. But Banavar's not talking about whatever was going on inside the mind of the sentient android, Mr. Data.

ARCHIVAL Captain Picard: Data, your brain is different, it's not the same as…

ARCHIVAL Mr. Data: We are more alike than unlike, my dear captain.

The Star Trek innovation Banavar's talking about is the way nobody on the Starship Enterprise needs a graphic interface like a mouse to activate their computer. They just talk to the computer.

ARCHIVAL Star Trek Voice 1: Computer, locate Lt. Commander Data.

ARCHIVAL Star Trek Voice 2: Computer, read the entire crew roster for the Enterprise.

ARCHIVAL Star Trek Voice 3: How about some different music, Computer? Something with a Latin beat...

Guru Banavar: Everybody knows how to speak, that's a basic property of all human beings. Think of GPT as a conversational user interface, just like you have graphical user interfaces. Think of this as a conversational user interface to anything you want.

Banavar's saying GPT's creators have taught a machine how to talk without teaching it how to think. There's not a brain in there yet, but the machine's got something like a mouth and ears. It can send and receive natural language. And that innovation could mean we'll see GPT-powered conversational layers added to devices all over the place in the near future. Like Siri but, you know, it actually works.

ARCHIVAL Siri: I don’t have an answer for that. Is there something else I can help with?
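In software terms, the idea is to put a language model between plain speech and a device's narrow menu of commands. Here is a minimal sketch, where llm_complete is a hypothetical stand-in for a call to any chat model's API.

    def llm_complete(prompt):
        # Hypothetical stand-in: a real system would send this prompt to a hosted model.
        return "play_music"

    COMMANDS = {
        "play_music": lambda: print("Playing something with a Latin beat..."),
        "locate_crew": lambda: print("Lt. Commander Data is on the bridge."),
    }

    def conversational_interface(utterance):
        # Ask the model to map free-form speech onto one known command.
        prompt = (f"Map this request to one of {list(COMMANDS)}: {utterance!r}. "
                  "Reply with the command name only.")
        command = llm_complete(prompt).strip()
        COMMANDS.get(command, lambda: print("I don't have an answer for that."))()

    conversational_interface("How about some different music, computer?")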

But Banavar's also saying, let's not get ahead of ourselves. Some people said similar stuff about Watson. In fact, Banavar was one of them. This is him back in 2015:

ARCHIVAL Guru Banavar: My vision for Watson is that someday every professional on the planet will have a Watson supporting them to do their job.

The highest hope was you could train a Watson on specific areas of medical research, and create a digital assistant at the fingertips of a doctor in the exam room.

Guru Banavar: Now that I'm far away from Watson, I can step back and say, okay, you know, hindsight is 20/20 and 10 years ago, we did not appreciate the complexity of the field of medicine. The level of scientific rigor in the world of medicine and the level of regulation awareness and liability awareness and all of that stuff is so high. And the consequences of every decision you make, right? That we did not really understand.

There was also the problem that, whether you're talking about cancer research or a game show, it was super labor-intensive to train Watson well enough to give correct answers in any specific domain.

Guru Banavar: So the business model was not well thought out. There was a technology, uh, that was working in certain situations. The business model was not well thought out at all. Now, fast forward 10 years later, or 12 years later now, one of the most popular questions that people ask is: can GPT become the next search engine?

Guru Banavar: Think about whether there's a business case for that. Okay? You know, first the whole business model behind search is that somebody's gonna click a link and you get taken to a site, which hopefully has, you know, paid some advertising dollars. That's why they were at the top of the list and they were sponsored and et cetera, et cetera.

Guru Banavar: So they're making money when you click on it. So if, if you are not able to click on anything, then there's no business model, right? So you have to have clickable links. And in order to have clickable links, you need to have not just a sort of a generalized statistical learning, but you have to have very concrete references back to the sources from which you got your information.

ChatGPT generates the words in its answers not factually, but statistically, according to how likely they are to relate to words earlier in the conversation. This is why GPT often gets factual questions hopelessly, hilariously wrong. Banavar says these kinds of factual challenges aren't just a surface problem that will be easy to eliminate.

Guru Banavar: It's deeper in the structure. The internal structure of all of these layers — internal layers of a neural network architecture — are very, very hard to even understand, let alone manipulate and make sure that the right sources, like, you know, it's a WebMD source or it's a Cleveland Clinic source, or whatever sources are appropriately referenced. That is not at all an easy problem to solve. But it's not hopeless — you can potentially solve it. But here's the other problem. Let's say you solve that problem. Today a single query on GPT costs about 30 cents per prompt. 30 cents. A Google search costs about 3 cents. So it's, it's an order of magnitude, you know, more efficient.

So every time you ask ChatGPT a question, it takes a lot of computing power. And those computers, with all of their GPU chips, cost a lot of money to run. Which is probably why, in his congressional testimony, Sam Altman, the CEO in charge of ChatGPT, said he'd like it if his users dialed back their enthusiasm for the app.

ARCHIVAL Sam Altman: We're not trying to get them to use it more. Actually, we'd love it if they'd use it less, because we don't have enough GPUs.

Guru Banavar: They may be just breaking even or something, but at some point in time people are not gonna be paying if you're not getting enough value from it. And today they have a subscription model, but Google is free, right? The way they make money is because they spend three cents, but they actually make six cents or eight cents or 10 cents per search on average. The Google business model is super great. A prompt on GPT costs about 10x today, and there is no clear way to make money from it yet.
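The arithmetic behind that worry fits in a few lines, using the rough figures quoted in this conversation. These numbers are illustrative only; real costs vary and have shifted since this episode aired.

    # Back-of-the-envelope economics, using the rough figures quoted above.
    GOOGLE_COST_PER_SEARCH = 0.03     # ~3 cents of compute per search
    GOOGLE_REVENUE_PER_SEARCH = 0.08  # ~6 to 10 cents of ad revenue, on average
    GPT_COST_PER_PROMPT = 0.30        # ~30 cents of compute, roughly 10x a search

    search_margin = GOOGLE_REVENUE_PER_SEARCH - GOOGLE_COST_PER_SEARCH
    gpt_margin = GOOGLE_REVENUE_PER_SEARCH - GPT_COST_PER_PROMPT

    print(f"Search margin per query: {search_margin:+.2f} USD")  # +0.05
    print(f"GPT margin at ad rates:  {gpt_margin:+.2f} USD")     # -0.22
    # At those rates, the ad-supported search business model doesn't transfer.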

So there are some shortcomings to this technology. GPT absorbs and generates the language that we speak very fluently and very fast. But that doesn’t mean it’s intelligent. And its statistical architecture makes it really hard to know whether what it’s saying is actually true. GPT could make talking to our devices much easier in the future — but it’ll need to make money first, and the path to profitability is less than clear. All this suggests … GPT’s not gonna be taking over the world overnight.

But there's one obvious way that an automatic generator of possibly bogus prose could have a big impact on the world right now. And the impact could be pretty serious.

[MUSIC SHIFTS]

Peter Bergen: If you were to select a proper metaphor for what AI can do, is it like electric saws showing up in the woodshop? Is it like machine guns showing up in the trenches in World War I? What is it?

Renee DiResta: Oh man, I'm so bad at metaphors. Well, the word that popped into my head when you were talking was accelerant. So, you know, things are on fire. Just, blow them up a bit more, right? Make it happen, faster and easier and, in a potentially, more destructive way.

What kinds of things can AI — right now — "blow up a bit more?"

Renee DiResta: There has been a very long history of propaganda and influence operations as great powers try to destabilize rivals.

That's Renee DiResta. She's a researcher at the Stanford Internet Observatory, which studies the abuse of information technologies with a special focus on social media. Earlier in her career, she worked with the U.S. government to study the propaganda that Islamist terrorists wrote to recruit people online.

Renee DiResta: In 2015, I had done some work looking at ISIS and ISIS propaganda on Twitter. And I had spoken with folks in the Obama administration and the State Department about that particular collection of activities. And at the time, we were talking about what the U. S. government response should look like, how do you handle a terrorist propaganda operation. They are, in fact, successfully recruiting people who then go on to commit violent acts. So saying that this is just some words that stay online, that obviously was demonstrably not true.

Renee DiResta: And around this time, a reporter named Adrian Chen had written this article called “The Agency” for the New York Times Magazine. And he detailed exactly that, an agency, the Internet Research Agency, of twenty-something trolls who were remarkably able to shape public conversation in their own information space.

If there were a sports league for online propaganda operations, ISIS and Russia’s Internet Research Agency would be among the top teams. DiResta has studied both in depth on behalf of the U.S. government.

The Internet Research Agency, with its incongruously bland name, was located inside a gray, four-story office building in St. Petersburg, Russia. It was staffed like a marketing agency with cubicles full of young writers who spent their days crafting articles, writing social media posts and tracking their views. But their aim wasn't to sell a particular product or boost some company's profile. It was to interfere in politics. By bending the opinions of internet users in directions useful to the Russian state. You may have heard the Internet Research Agency described as a “Russian troll farm.”

Renee DiResta: Troll refers to people who stir the pot on the internet, you know, people who are there to be provocateurs. Troll in the capacity of state actor was because this pre-existing concept of troll, of provocateurs, came to be incorporated into the notion of a farm, and that there was like a professionalized, organized collection of such people. So in this case, a, kind of, state-linked mercenary group of people designed to stir the pot.

Renee DiResta: In 2015 or so, the Internet Research Agency began running manipulation campaigns in a variety of different areas. Some of them were actually domestically focused, related to Russia's invasion of Crimea at the time. But some of them also targeted adversaries, outside adversaries, like the United States. And the Internet Research Agency was a mercenary organization run by a man named Yevgeny Prigozhin. And so that gave them some kind of plausible deniability. They were not a government actor. They had very, very close ties to the Kremlin, but they were nominally independent.

You may recognize Yevgeny Prigozhin as the former head of the Wagner Group — a purportedly private military company whose gun-toting mercenaries stiffened the backbone of Russia's invasion of Ukraine. Prigozhin died in a mysterious plane crash after publicly falling out with the Russian president, Vladimir Putin. He didn't publicly admit his connection to the digital mercenaries at the Internet Research Agency until near the end of his life.

Renee DiResta: Ahead of the 2022 midterms, he did begin to acknowledge that he ran the Internet Research Agency, and had sort of a very poetic interview that he gave where he says, um, we interfered, we are interfering, and we will interfere.

Peter Bergen: That's pretty straightforward.

Renee DiResta: But there's a bombast there also, right, making yourself into this formidable, extraordinary propagandist who has swung elections and done all sorts of nefarious and powerful things with words.

After the 2016 U.S. presidential election, there were widespread allegations that Russia had weaponized social media to influence American voters. And the U.S. Senate Select Committee on Intelligence asked DiResta and a team of researchers to study what exactly had gone down. The committee compelled Google, Facebook, and Twitter to turn over huge troves of data linked to suspicious Russian social media accounts.

Renee DiResta: And the Internet Research Agency which was staffed by really young people, 20 somethings who really understood the art of online trolling, set up fake pages and fake accounts masquerading as Americans, from a whole variety of different American identities. So sometimes that was Black women. Sometimes that was descendants of the Confederacy. Sometimes that was, you know, liberal white women. And so you had this collection of American identities that they put on and they communicated with real people who held those identities. And tried to either persuade them or entrench them, more often it was entrenchment, into a particular belief and then to pit those different identities against each other.

If you spent any time scrolling around on Facebook or Instagram around 2016, you may recognize the names of some of these groups. There was one focused on immigration called “Secured Borders.” There was another one focused on the Black Lives Matter movement called “Blacktivist.” Another targeting Christians was called “Army of Jesus.” There was even a group trying to weaponize Texas pride called “Heart of Texas.” All of these groups were in fact controlled by trolls operating from the heart of St. Petersburg, Russia.

Renee DiResta: This was a multiyear campaign and it did extend through the 2016 presidential election campaign, a very close and contentious presidential race, which is one of the reasons why it's an ongoing source of fascination for people.

Peter Bergen: Did it work?

Renee DiResta: Well, that's the thing that, you know, I don't know.

Peter Bergen: Hmm.

Renee DiResta: Those of us who study these kinds of things on social media, our visibility is very limited. So I can describe the contours of the propaganda campaign. I can tell you the kind of engagement it got, which was very, very high among certain communities. I can tell you what kinds of rhetoric performs well, all that kind of stuff. But I can't tell you what happened next. We can't see what people went and did. We can't see if people who saw the content in turn went and voted. And so this is where that question of did it have an impact is really still very much a matter of speculation.

Peter Bergen: So that election was decided by a relatively small number of votes in just a few states. So, clearly the calculation here is that given how close American elections are, if you make a one percent change of the dial, that could have a pretty big effect.

Renee DiResta: This is true, right? And they had a very, very, very clear candidate preference. There was no ambiguity about who the effort was supposed to support. And that's because the liberal personas and even the pro-Muslim personas, the leftist personas were all against Hillary Clinton.

Peter Bergen: We know what their intent was, it's hard to measure their effect, right?

Renee DiResta: Mmhmm. Yup.

Peter Bergen: That said, you do have metrics on Facebook, on Instagram, on Twitter, on YouTube about how much they did. It seems pretty impressive from your report. It says they reached 126 million people on Facebook, 20 million on Instagram, 1.4 million users on Twitter, they uploaded over a thousand videos to YouTube. So this was not nothing.

Renee DiResta: Just to be clear, the 126 million, those come from Facebook. I just want to clarify that because some of the people who were upset by the findings acted as if these were numbers that we had somehow, um, ascertained.

Peter Bergen: And there's no reason to doubt them, right?

Renee DiResta: Nope, not at all. But, again, this is the interesting question, have you persuaded somebody or have you simply reinforced someone's existing beliefs? And that's where, when I say we don't know what the impact was as far as did it change any votes or change any hearts and minds.

Peter Bergen: Well, you mentioned ISIS, that you first got involved with this because of ISIS, and we know that hundreds of Americans, as a result of what they saw online, tried to join ISIS.

Renee DiResta: Correct

Peter Bergen: And dozens of them actually got to Syria or Iraq.

Renee DiResta: Yes.

Peter Bergen: It was very dangerous, and then we had an attack in Florida that killed 49 people, by somebody who was inspired by ISIS. He portrayed himself as a soldier of ISIS, he'd never been to Syria or Iraq, and yet what he saw online was enough to carry out this attack. Similarly in San Bernardino, where a husband and wife killed 14 people attending a Christmas, uh, holiday party, again, inspired by ISIS. So, I mean, the fact is, yes, it's hard to maybe ascertain what the real world influence is, but we know from other influence operations, for instance, ISIS, that there are real world consequences to this kind of stuff.

Renee DiResta: Yes. And also that small numbers of people can have a profound impact nonetheless.

Here’s where the threat of artificial intelligence like GPT comes in. While the Internet Research Agency ran a sophisticated propaganda campaign, it remained in many ways a handmade operation. Managers had to find and hire young Russians with a good understanding of America’s language and culture.

Employees described working 12-hour shifts with daily quotas to write more than a dozen original posts on various social media platforms and write anywhere from 150 to 200 comments to boost the posts of fellow trolls. Thinking up and writing out messages was all the work of fallible human beings. And sometimes they missed the mark.

Renee DiResta: So, one of the very, very first operations that the Internet Research Agency, uh, tried outside of Russia, targeting the U.S., was this thing called Columbian Chemicals. And it happened on September 11th, 2014. And what was very interesting about Columbian Chemicals was that they created a whole bunch of accounts, and they tried to pretend that these were Americans who were in Louisiana, and they were witnessing the explosion of a chemical plant, and they made all sorts of different pieces of media to tie into this story. A Wikipedia page, a fake Facebook news organization. A bunch of accounts on Twitter, which was becoming popular at the time, and then Instagram. And they had all these pictures of dead horses and big black smoky clouds that they pretended were tied to this incident in Louisiana. And what was very interesting about it is they were just repurposing images and footage and things from other disasters.

Peter Bergen: My wife is from Louisiana, so interesting to hear. Were they plausibly Louisianans, or could you tell it was sort of bogus?

Renee DiResta: One of the, one of the tells, was that they kept using the hashtag “New Orlean,” without the S on the end. And this is actually roughly the transliteration, you know, from the Russian name for the city into English. It was sort of in this uncanny valley of just there, but misses it by a tiny little bit. And that's the kind of thing that sends people's hackles up and they think like, okay, this isn't real. Somebody's trying to manipulate me. So this is the same thing with profile pictures that are stock photos, right? If somebody right clicks on the image, it's like this account is a little bit suspicious. Let me go look into it and then discovers that, you know, it's a picture of an Instagram model that's just been flipped. Well, that sends up a pretty big red flag that this person isn't what they seem to be.

Now that propagandists have access to tools like ChatGPT, obvious language mistakes will be much easier to avoid. And image-generating AI could make it a lot harder to spot fakes.

Renee DiResta: So now, with generative AI, you can produce content, text content, that doesn't have those types of screw ups, right? It sounds very, very fluent. It understands how to write things in the in-group of the community. You can fine tune a model on the vernacular of the community you're targeting. So you're not going to have screw ups like hashtag New Orlean, and you're not going to have profile pictures where you can right click and immediately identify the thing to be somebody else, because it's going to be a new and novel person that's been generated that doesn't exist anywhere else.

The Internet Research Agency is widely reported to have been disbanded. But that won't keep new troll farms from springing up. And with a tool like ChatGPT, a lot of the work that once needed to be done by hand down on the troll farm… is now mechanized.

So all those young Russians working long days, writing dozens of posts and hundreds of incendiary comments on social media meant to sow division — their work can now be done in a fraction of the time, at a fraction of the cost, and at a pace and volume that far exceeds anything we saw in 2016.

And that has some pretty big implications for high-stakes elections in 2024. In fact, more people could vote in 2024 than at any time in history. There are elections not only in the United States, but in India and Indonesia — representing just over two billion people.

Renee DiResta: The propagandists have in their hands now the capacity to create both long form and short form content, relatively effortlessly, very inexpensively, to make it far more impactful and persuasive than what they could have produced themselves. You can generate a whole lot of posts very, very quickly, instantaneously, in fact. And then you have fewer operators, and the operator's job is more to serve as a curator for what the machine writes. And so it reduces your personnel costs, while improving the quality of the output simultaneously. The supply of propaganda will soon be potentially infinite.

As scary as all of this sounds, we humans do still have some agency in all this. Things can only go viral if you share them. GPT may be a transformative new power tool, but using generative AI for propaganda can't remove humans from the loop entirely.

Renee DiResta: Just because there's a slew of propaganda out there in the world doesn't mean that people internalize it or believe it. The thing that makes people most likely to believe it is that it comes from somebody that they trust. And so these accounts still have to do what the Internet Research Agency set out to do, which is to develop relationships with their audiences.

Peter Bergen: I started experimenting with ChatGPT around Christmas of 2022, when it first came out to the public. And I asked it to write an op-ed in the style of myself about terrorism. And within about three seconds, it produced a relatively well-reasoned op-ed, saying the Biden administration needed to up its game on terrorism and had some concrete policy prescriptions about cracking down on Iran.

Peter Bergen: Now, it did have at least two errors of fact, but every time I write an op-ed, it's littered with errors of fact until I fact-check it. And it did have some, I think, questionable opinions, because then I asked it to assess the role of women in the French Revolution. And, you know, I don't think that would be winning any history prizes. But I was struck by the speed and also the relative persuasion that it wrote in — you know, if you write for a living, you're going to have a lot of stuff out there. So it's going to produce something that's quite convincing.

Renee DiResta: Yeah, I had the same experience. I had it co-author an essay in The Atlantic with me — it does sound remarkably like you. You think you have a style and you realize this machine can just synthesize it, like, boom. You're not really creative, really, was one of my humbling takeaways.

Renee DiResta: But then the other was, at one point it generated a closing paragraph where it gave me an expert and a closing quote from that expert. And it was an AI researcher with a Russian name who worked at MIT in the 1960s, an AI pioneer, so on and so forth. And I went to Google the person and like, he didn't seem to exist. And I was like, okay, all right, he doesn't seem to exist. It made up a person and it made up a quote. And then, despite knowing how it worked, I still spent probably 45 minutes, like, on MIT's website going through various directories and archives and things like that. I even sent an email to a friend, I was like, hey, do you have a, is there some like faculty directory archive or something? Can you tell me if this person ever existed? Because it seemed so convincing.

Renee DiResta: I think we're — we haven't quite adapted to the realization that it is very easy to synthesize an extremely plausible version of reality just because reality has so many patterns to it, right, and we're accustomed to not even thinking about what those are, and so it is a very easy way, unfortunately, for a person to be misled. When you search for something in a search engine, you are operating in a mindset of trust because for the last 20 years, we have trusted what the results give us. And now we're in an environment where that trust may not be entirely warranted. And this period of adaptation I think is going to be very disorienting.

The computer scientist we interviewed, Guru Banavar, doesn't think ChatGPT is going to wake up one morning, run amok, and kill us. But he agrees that Renee DiResta is worrying about exactly the right stuff.

Guru Banavar: I have a very, very strong concern about the misinformation and disinformation that somebody can generate at scale with this technology. You cannot keep up with an engine like this. And so to me, if a bad actor or a set of bad actors can understand how to use this powerful technology at such a low cost and at such a high scale, that to me is a massive national security issue.

So maybe Senator Hawley’s analogy about the atomic bomb at the beginning of the episode was a little alarmist.

ARCHIVAL Josh Hawley: Is it going to be like the printing press? Or is it going to be more like the atom bomb?

An industrial-scale propaganda bomb won't necessarily level cities, but it will create its own kind of fallout. It's just that the risk from that fallout won't be radiation. It'll be a mess of something else.

Guru Banavar: It’s bullshit. You just get blanketed by bullshit to the point where you cannot recover from it.

If you’re interested in learning more about the stories and issues that we discussed this episode, we recommend the following books: Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins by Garry Kasparov, and The Age of AI: And Our Human Future by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher.

Speaking of recommendations — I’d love it if you’d tell your friends about this show. And maybe give us a nice rating on your podcast app. Word of mouth and good reviews are among the best ways for new podcasts to get discovered. So thank you!

###

CREDITS:

IN THE ROOM WITH PETER BERGEN is an Audible Original.

Produced by Audible Studios and FRESH PRODUCE MEDIA.

This episode was produced by Erik German, with help from Holly DeMuth.

Our executive producer is Alison Craiglow.

Katie McMurran is our technical director.

Our staff also includes Alexandra Salomon, Laura Tillman, Luke Cregan, and Sandy Melara.

Theme music is by Joel Pickard.

Our Executive Producers for Fresh Produce Media are Colin Moore, Jason Ross and Joe Killian.

Our Head of Development is Julian Ambler.

Our Head of Production is Elena Bawiec.

Eliza Lambert is our Supervising Producer.

Maureen Traynor is our Head of Operations.

Our Production Manager is Herminio Ochoa.

Our Production Coordinator is Henry Koch.

And our Delivery Coordinator is Ana Paula Martinez.

Audible’s Chief Content Officer is Rachel Ghiazza.

Head of Content Acquisition & Development and Partnerships: Pat Shah.

Special thanks to Marlon Calbi, Allison Weber, and Vanessa Harris.

Copyright 2024 by Audible Originals, LLC

Sound recording copyright 2024 by Audible Originals, LLC