Updated April 27, 2023 at 6:11 PM ET

This week, the Republican National Committee used artificial intelligence to create a 30-second ad imagining what President Joe Biden's second term might look like.

It depicts a string of fictional crises, from a Chinese invasion of Taiwan to the shutdown of the city of San Francisco, illustrated with fake images and news reports. A small disclaimer in the upper left says the video was "Built with AI imagery."

The ad was just the latest instance of AI blurring the line between real and make-believe. In the past few weeks, fake images of former President Donald Trump scuffling with police went viral. So did an AI-generated picture of Pope Francis wearing a stylish puffy coat and a fake song using cloned voices of pop stars Drake and The Weeknd.

Artificial intelligence is quickly getting better at mimicking reality, raising big questions about how to regulate it. And as tech companies give anyone the ability to create fake images, synthetic audio and video, and text that sounds convincingly human, even experts admit they're stumped.

"I look at these generations multiple times a day and I have a very hard time telling them apart. It's going to be a tough road ahead," said Irene Solaiman, a safety and policy expert at the AI company Hugging Face.

Solaiman focuses on making AI work better for everyone. That includes thinking a lot about how these technologies can be misused to generate political propaganda, manipulate elections, and create fake histories or videos of things that never happened.

Some of those risks are already here. For several years, AI has been used to digitally insert unwitting women's faces into porn videos. These deepfakes sometimes target celebrities and other times are used to take revenge on private citizens.

Those harms underscore that the risks from AI are not just a matter of what the technology can do; they're also about how we as a society respond to these tools.

"One of my biggest frustrations that I'm shouting from the mountaintops in my field is that a lot of the problems that we're seeing with AI are not engineering problems," Solaiman said.

Technical solutions struggling to keep up

There's no silver bullet for distinguishing AI-generated content from that made by humans.

Technical solutions do exist, like software that can detect AI output, and AI tools that watermark the images or text they produce.
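
To make the watermarking idea concrete, here is a minimal, self-contained sketch of the statistical "green list" approach some text watermarks use. It is a toy illustration, not any vendor's actual scheme: the hash-based word split, the function names and the roughly 0.5 baseline are all assumptions made for the example.

```python
# Toy illustration of statistical text watermark detection. It assumes
# the generator biased its word choices toward a pseudorandom "green
# list" seeded by the preceding word, so watermarked text shows an
# improbably high fraction of green words.
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign about half of all words to a 'green list'
    that depends on the previous word."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of words that fall on the green list for their context."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

# Over long passages, unwatermarked text should hover near 0.5; a
# generator that steers toward green words pushes this well above 0.5,
# which is what a detector flags.
if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    print(f"green fraction: {green_fraction(sample):.2f}")
```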

Another approach goes by the clunky name content provenance. The goal is to make it clear where digital media — both real and synthetic — comes from.

The idea is to let people easily "identify what type of content this is," said Jeff McGregor, CEO of Truepic, a company working on digital content verification. "Was it created by a human? Was it created by a computer? When was it created? Where was it created?"
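
As a rough illustration of the verification flow McGregor describes, the sketch below binds a file's hash to a signed provenance record. Real provenance systems embed public-key signatures in the file's metadata; this toy substitutes a shared-key HMAC, and the key, field names and functions are all hypothetical, chosen only to keep the example self-contained.

```python
# Toy sketch of content provenance: a trusted device or tool signs a
# media file at creation time, and anyone can later verify that the
# file still matches its signed record. Any edit breaks the match.
import hashlib
import hmac
import json

SIGNING_KEY = b"device-secret"  # hypothetical key held by the capture device

def sign_media(data: bytes, creator: str, created_at: str) -> dict:
    """Produce a provenance record binding the file's hash to its origin."""
    record = {
        "sha256": hashlib.sha256(data).hexdigest(),
        "creator": creator,          # e.g. "human/camera" vs. "AI generator"
        "created_at": created_at,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_media(data: bytes, record: dict) -> bool:
    """Check both the signature and that the file hasn't been altered."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(data).hexdigest())

photo = b"...raw image bytes..."
rec = sign_media(photo, creator="human/camera", created_at="2023-04-27T18:11:00Z")
print(verify_media(photo, rec))         # True: file matches its record
print(verify_media(photo + b"x", rec))  # False: any edit breaks the match
```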

But all of these technical responses have shortcomings. There's not yet a universal standard for identifying real or fake content. Detectors don't catch everything and must constantly be updated as AI technology advances. Open-source AI models may not include watermarks.

Laws, regulations, media literacy

That's why those working on AI policy and safety say a mix of responses is needed.

Laws and regulation will have to play a role, at least in some of the highest-risk areas, said Matthew Ferraro, an attorney at WilmerHale and an expert in legal issues around AI.

"It's going to be, probably, nonconsensual deepfake pornography or deepfakes of election candidates or state election workers in very specific contexts," he said.

Ten states already ban some kinds of deepfakes, mainly pornography. Texas and California have laws barring deepfakes targeting candidates for office.

Copyright law is also an option in some cases. That's what Drake and The Weeknd's label, Universal Music Group, has invoked to get the song impersonating their voices pulled from streaming platforms.

When it comes to regulation, the Biden administration and Congress have signaled their intent to act. But as with other matters of tech policy, the European Union is leading the way with the forthcoming AI Act, a set of rules meant to put guardrails on how AI can be used.

Tech companies, however, are already making their AI tools available to billions of people, and incorporating them into apps and software many of us use every day.

That means, for better or worse, sorting fact from AI fiction requires people to be savvier media consumers, though it doesn't mean reinventing the wheel. Propaganda, medical misinformation and false claims about elections are problems that predate AI.

"We should be looking at the various ways of mitigating these risks that we already have and thinking about how to adapt them to AI," said Princeton University computer science professor Arvind Narayanan.

That includes efforts like fact-checking and asking yourself whether what you're seeing can be corroborated, a practice Solaiman calls "people literacy."

"Just be skeptical, fact-check anything that could have a large impact on your life or democratic processes," she said.

Copyright 2023 NPR. To see more, visit https://www.npr.org.

Transcript

STEVE INSKEEP, HOST:

When President Biden announced his bid for a second term this week, here is how the Republican National Committee responded. They used artificial intelligence to create a 30-second ad imagining what President Biden's second term might look like, complete with fake news reports.

(SOUNDBITE OF POLITICAL AD)

UNIDENTIFIED PERSON #1: This morning, an emboldened China invades Taiwan.

UNIDENTIFIED PERSON #2: Financial markets are in freefall as 500...

INSKEEP: A little disclaimer said the video was, quote, "built with AI imagery." NPR's Shannon Bond reports that as technology gets better at faking reality, some people ask how to regulate it.

SHANNON BOND, BYLINE: That GOP ad was just the latest instance of AI blurring the line between real and make-believe. In the past few weeks, fake images of former President Donald Trump scuffling with police went viral, so did an imagined picture of Pope Francis wearing a stylish puffy coat and a fake song using cloned voices of pop stars Drake and The Weeknd. As AI tools unleash the ability for anyone to create fake images, synthetic audio and video and text that sounds convincingly human, even experts admit they're stumped.

IRENE SOLAIMAN: I look at these generations multiple times a day, and I have a very hard time telling them apart. It's going to be a tough road ahead.

BOND: Irene Solaiman is a safety and policy expert at the AI company Hugging Face. She focuses on making AI work better for everyone, which includes thinking a lot about how these technologies can be misused to generate political propaganda, manipulate elections and create fake histories or videos of things that never happened. Some of those risks are already here. For several years, AI has been used to put women's faces in porn videos, sometimes targeting celebrities and other times to take revenge on private citizens. Solaiman worries things will get worse.

SOLAIMAN: One of my biggest frustrations that I'm shouting from the mountaintops in my field is that a lot of the problems that we're seeing with AI are not engineering problems.

BOND: When it comes to helping people tell apart human and AI-generated content, one thing is clear - there's no silver bullet. There are technical solutions, like software that can detect AI output and AI tools that watermark the images or text they produce. Another approach goes by the clunky name content provenance. The goal is to make it clear where digital media, both real and synthetic, comes from. Jeff McGregor is CEO of Truepic, a company working on verifying digital content with a special signature that tells consumers...

JEFFREY MCGREGOR: Was it created by a human? Was it created by a computer? When was it created? Where was it created?

BOND: But there's not yet a universal standard for identifying real or fake content. Detectors don't catch everything and must constantly be updated as AI technology advances. Open-source AI models may not include watermarks. That's why Solaiman and others working on AI policy and safety say we need a mix of responses. Laws and regulation will have to play a role, at least in some of the highest-risk areas, says Matthew Ferraro, an attorney and expert in legal issues around AI.

MATTHEW FERRARO: It's going to be, probably, nonconsensual deepfake pornography or deepfakes of election candidates or state election workers in very specific contexts.

BOND: In the case of that AI-generated Drake song, his record label is using copyright law to get it taken down. On regulation, Europe is leading the way with a forthcoming set of rules meant to put guardrails on how AI can be used. But tech companies are already making their AI tools available to billions of people and incorporating them into apps and software many of us use every day. And that means, for better or worse, sorting fact from AI fiction requires us all to be savvier media consumers. Princeton University computer science professor Arvind Narayanan says we don't need to reinvent the wheel. Propaganda, medical misinformation and false claims about elections are problems that predate AI.

ARVIND NARAYANAN: We should be looking at the various ways of mitigating these risks that we already have and thinking about how to adapt them to AI.

BOND: So check your sources. Ask yourself if what you're seeing can be corroborated or fact checked. And bring a healthy dose of skepticism the next time you see a funny picture of the pope.

Shannon Bond, NPR News. Transcript provided by NPR, Copyright NPR.
