
Deepfakes’ creep from porn to politics could upend election, experts warn (The Washington Times)

By Dan Boylan (The Washington Times) / Oct. 23, 2019

Politicians, high-tech firms and the media have sounded the alarm about deepfake videos as a primary threat to the democratic process.

But the vast majority of digitally altered videos that have been produced and distributed feature pornographic images of celebrities, even as political parties and internet firms are developing ways to counter the technology.

Deeptrace Labs, a cybersecurity firm based in Amsterdam, monitors the internet for deepfake content. It reports that the number of fake images and videos soared from nearly 8,000 in December to about 15,000 in July — with 96% of the material pornographic in nature.

Money is the driving force behind deepfake pornography on the internet, where porn websites run lucrative advertising schemes, Deeptrace said in an email, adding that the top four porn sites have received more than 134 million views of their deepfake content.

“We found that the deepfake pornography ecosystem is almost entirely supported by dedicated deepfake pornography websites,” said Giorgio Patrini, Deeptrace’s chief scientist.

Many social media sites have banned deepfake porn, but it remains accessible on other sites where income is generated via pop-up Viagra ads or pay-per-view live sex shows.

A deepfake porn creation app has charged $1 to digitally strip the clothing from a single photo and $20 for a month of unlimited access, according to Deeptrace.

The vast majority of porn deepfakes target women, Mr. Patrini said.

Last year, actress Scarlett Johansson, an early victim of the technology, expressed anger and frustration but acknowledged it is difficult to fight.

“I think it’s a useless pursuit, legally, mostly because the internet is a vast wormhole of darkness that eats itself,” she told The Washington Post.

Still, cybersecurity specialists warn that deepfake political content is expected to evolve rapidly as the 2020 elections near. They say the timing of a bogus video’s release, not necessarily the quantity of faux content, will be the major problem.

“We have no idea how this will play out, but it could only take one or two potent political deepfakes, that are timed properly, to potentially swing an election,” Marc Berkman, executive director of the nonprofit Organization for Social Media Safety, told The Washington Times.

One example of a political deepfake is a video released in May that purportedly shows House Speaker Nancy Pelosi slurring her words during a speech. The footage spread quickly across Twitter, YouTube and Facebook, and was ultimately viewed more than 2 million times. Cyberanalysts attributed the slurring to video manipulators slowing Ms. Pelosi’s speech and stretching her words without altering the natural pitch of her voice.
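
For readers curious how that effect works, the sketch below shows a pitch-preserving time stretch using the open-source librosa library. The file names and slowdown rate are assumptions for illustration; this is not the manipulators’ actual tooling, only a demonstration of how audio can be slowed without lowering the speaker’s pitch.

```python
# Illustrative sketch: slow speech with a phase-vocoder time stretch,
# which changes duration while leaving the natural pitch of the voice intact.
import librosa
import soundfile as sf

y, sr = librosa.load("speech.wav", sr=None)          # hypothetical input clip
slowed = librosa.effects.time_stretch(y, rate=0.75)  # play at 75% speed, pitch unchanged
sf.write("speech_slowed.wav", slowed, sr)            # hypothetical output path
```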

Photos and videos have been doctored for decades. The process is usually labor intensive and often produces questionable results. The technology behind deepfakes, however, relies on artificial intelligence.

“Deepfake” is a portmanteau of “deep learning” and “fake.” Deep learning is a branch of machine learning, a type of artificial intelligence in which a computer program learns to carry out specific tasks without explicit instructions. Drawing on techniques such as trial and error, inference and approximation, a deepfake program learns to superimpose one person’s face onto another’s footage, spending hours of training to stitch the frames together seamlessly into a bogus video.
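
As a rough illustration of that face-swapping idea, the classic approach trains one shared encoder and two decoders, one per person, then swaps decoders to render one person’s face with the other’s pose and expression. The sketch below, written against PyTorch with made-up image sizes and no real training data, shows only the structure of the technique, not any particular tool’s implementation.

```python
# Minimal sketch of the shared-encoder / dual-decoder idea behind
# classic face-swap deepfakes. Sizes and data are assumptions for illustration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, 128),  # compact code capturing pose and expression
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()                          # one encoder shared by both identities
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

def reconstruct(x, decoder):
    return decoder(encoder(x))

# Training (sketch): each decoder learns to reconstruct its own person's faces,
# e.g. minimize mse_loss(reconstruct(faces_a, decoder_a), faces_a) and likewise for B.

# The "swap": encode a frame of person A, decode it with person B's decoder,
# producing B's face with A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)        # stand-in for a real video frame
fake_b = reconstruct(frame_of_a, decoder_b)
```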

On the federal level, officials say the red warning signals are flashing.

CIA Director Gina Haspel; Army Gen. Paul Nakasone, director of the National Security Agency and U.S. Cyber Command; FBI Director Christopher A. Wray; and then-Director of National Intelligence Dan Coats highlighted deepfakes at an annual worldwide threat assessment to Congress this year.

“Adversaries and strategic competitors probably will attempt to use deepfakes or similar machine-learning technologies to create convincing, but false, image, audio and video files to augment influence campaigns directed against the United States and our allies and partners,” Mr. Coats said in his prepared testimony.

‘A constantly evolving problem’
Capitol Hill lawmakers have yet to pass any significant deepfake legislation.

Asked how it is addressing deepfake concerns, the Republican National Committee did not provide information, citing operational security. But Republican officials said they are constantly reminding party members to be vigilant and report anything that appears suspicious to their digital team.

Meanwhile, the Democratic National Committee, which dealt with a major cybersecurity breach in 2016 when emails from the party and presidential nominee Hillary Clinton’s campaign were hacked and posted online, took a novel approach to highlighting deepfakes.

At the world’s largest annual gathering for hackers this August in Las Vegas, the DNC showed video of its chairman, Tom Perez, apologizing for not attending the event. The video was a deepfake.

Two weeks ago, two leading members of the Senate Select Committee on Intelligence — Mark R. Warner, Virginia Democrat, and Marco Rubio, Florida Republican — lashed out at the nation’s top online media platforms, including Facebook, Twitter, YouTube, Reddit and LinkedIn, for not doing enough.

“Despite numerous conversations, meetings and public testimony acknowledging your responsibilities to the public, there has been limited progress in creating industrywide standards on the pressing issue of deepfakes and synthetic media,” the senators wrote in a letter to Facebook CEO Mark Zuckerberg, himself a victim of a deepfake video.

Mr. Zuckerberg’s deepfake, which first surfaced in July, depicted the 35-year-old billionaire explaining that his social media firm’s success was based on a secretive organization.

Facebook, which has been in the firing line since Russian operatives dumped propaganda onto the social media platform during the 2016 election season, recently donated $10 million to artificial intelligence researchers battling deepfakes.

“This is a constantly evolving problem, much like spam or other adversarial challenges, and our hope is that by helping the industry and AI community come together we can make faster progress,” Facebook’s chief technology officer, Mike Schroepfer, wrote in a blog that the company shared with The Times.

Google assisted researchers this month by releasing a database of 3,000 deepfakes in a project partnering its tech incubator, Jigsaw, with the Technical University of Munich and the University Federico II of Naples’ new FaceForensics benchmark program.

Deepfake technology first emerged in 2017. Cybersecurity analysts say the production of believable phony content, for minimal cost and effort, has jumped to an entirely new level in recent months.

They hope that biodata, the unique biometric features of a person’s face or voice, could help stop the fakes.

Pierre Bourgeix, president of Ohio-based cybersecurity firm ESI Convergent, said marrying biodata with multifactor authentication, such as passwords or PIN codes, shows promise but could trigger “a massive surge in biometric hacking.”

“This goes beyond a national security problem into the realm of philosophy,” he told The Times. “How will we know the truth in the future?”

Scarlett Johansson arrives at the premiere of “Avengers: Endgame” at the Los Angeles Convention Center on Monday, April 22, 2019. (Photo by Jordan Strauss/Invision/AP)

https://www.washingtontimes.com/news/2019/oct/23/deepfakes-creep-celebrity-nudes-politics-could-upe/
