Deep faking it: America’s 2024 election faces the AI boom

May 30 (Reuters) – (Note: Strong language in paragraph 10)

“I actually really like Ron DeSantis,” Hillary Clinton reveals in a surprise online ad. “He’s just the guy this country needs, and I really mean it.”

Joe Biden finally lets the mask slip, unleashing a vicious rant against a transgender woman. “You’ll never be a real woman,” growls the president.

Welcome to America’s 2024 presidential race, where reality is up for grabs.

Clinton and Biden deepfakes — realistic yet fabricated videos created by AI algorithms trained on a wealth of online footage — are among thousands surfacing on social media, blurring fact and fiction in the polarized world of American politics.

Although such synthetic media have existed for several years, over the past year they have been supercharged by a wave of new “generative AI” tools such as Midjourney that make it cheap and easy to create convincing deepfakes, according to Reuters interviews with about two dozen specialists in fields including AI, online disinformation and political activism.

“It will be very difficult for voters to tell the real from the fake. And you can just imagine how Trump supporters or Biden supporters could use this technology to make the opponent look bad,” said Darrell West, a senior fellow at the Brookings Institution’s Center for Technology Innovation.

“There could be things that drop right before the election that nobody has a chance to take down.”

Tools that can generate deepfakes are being released with few or imperfect safeguards against harmful misinformation as the tech sector engages in an AI arms race, said Aza Raskin, co-founder of the Center for Humane Technology, a nonprofit that studies the impact of technology on society.

Former President Donald Trump, who will compete with DeSantis and others for the Republican nomination to take on Biden, himself shared a spoofed video of CNN anchor Anderson Cooper earlier this month on his social media platform Truth Social.

“That was President Donald J. Trump ripping us a new asshole here on CNN’s live presidential town hall,” Cooper appears to say in the footage, though the words don’t match the movement of his lips.

CNN said the video was a deep fake. A representative for Trump did not respond to a request for comment on the clip, which was still on his son Donald Jr.’s Twitter page this week.

Although major social media platforms such as Facebook, Twitter and YouTube have made efforts to ban and remove deepfakes, their effectiveness in controlling such content varies.

DEEPFAKE PENCE, NOT TRUMP

Three times as many video deepfakes of all kinds and eight times as many voice deepfakes have been posted online this year compared with the same period in 2022, according to DeepMedia, a company working on tools to detect synthetic media.

DeepMedia estimates that around 500,000 video and voice deepfakes will be shared on social media sites worldwide in 2023. Cloning a voice used to cost $10,000 in server and AI-training costs until late last year, but startups now offer it for a few dollars, it said.

According to the people interviewed, no one is sure where the generative AI path leads or how to effectively guard against its power for mass disinformation.

Industry leader OpenAI, which changed the game in recent months with the release of ChatGPT and the updated GPT-4 model, is grappling with the problem itself. CEO Sam Altman told Congress this month that election integrity was a “significant area of concern” and called for swift regulation of the sector.

Unlike some smaller startups, OpenAI has taken steps to limit the use of its products in politics, according to a Reuters analysis of the terms of use of half a dozen leading companies offering generative-AI services.

However, the guardrails have gaps.

For example, OpenAI says it prohibits its image generator DALL-E from creating images of public figures. When Reuters tried to generate images of Trump and Biden, the request was blocked and a message appeared saying it “may not follow our content policy.”

Still, Reuters was able to produce images of at least a dozen other US politicians, including former Vice President Mike Pence, who is also weighing a 2024 bid for the White House.

OpenAI also restricts any “scaled” use of its products for political purposes. This prohibits using its AI to send mass personalized emails to voters, for example.

The company, which is backed by Microsoft, explained its political policies to Reuters in an interview, but did not respond to further requests for comment on gaps in enforcing them, such as failures to block images of politicians.

Several smaller startups have no express restrictions on political content.

Midjourney, which launched last year, is the leading player in AI-generated imagery, with 16 million users on its official Discord server. The service, which ranges from free to $60 a month depending on factors such as the number of images and generation speed, is a favorite of AI artists and designers because of its ability to create hyper-realistic images of celebrities and politicians, according to four AI researchers and creators interviewed.

Midjourney did not respond to a request for comment for this article. During an online Discord chat last week, CEO David Holz said the company would likely make changes before the election to combat misinformation.

Midjourney wants to collaborate on an industry solution to enable tracking of AI-generated images with a digital equivalent of a watermark, and will consider blocking images of political candidates, Holz added.

AI-GENERATED REPUBLICAN AD

Even as the industry grapples with how to prevent abuse, some political players themselves are looking to harness the power of generative AI to shape campaigns.

So far, the only high-profile AI-generated political ad in the US was one released by the Republican National Committee in late April. The 30-second ad, which the RNC revealed was entirely AI-generated, used fake images to suggest a cataclysm if Biden is re-elected, with China invading Taiwan and San Francisco being shut down by crime.

The RNC did not respond to requests for comment about the ad or the broader use of AI. The Democratic National Committee declined to comment on the use of the technology.

Reuters polled all the Republican presidential campaigns on their use of AI. Most did not respond, though Nikki Haley’s team said it was not using the technology, and the campaign of prospective candidate Perry Johnson said it was using AI to “generate copy and iterate,” without elaborating.

The potential for generative AI to produce emails, posts and campaign ads is compelling to some activists, who believe the low-cost technology could level the playing field in elections.

Even deep in rural Hillsdale, Michigan, machine intelligence is on the march.

John Smith, the Republican chairman for Michigan’s 5th congressional district, is holding several educational meetings for his allies to learn how to use AI for social media and ad generation.

“AI helps us compete with the big players,” he said. “I see the biggest uptake in local races. Someone who is 65 years old, a farmer and a county commissioner, could easily be picked off by a younger candidate using the technology.”

Political consultants are also looking to use AI, further blurring the line between real and unreal.

Numinar Analytics, a political data company that focuses on Republican clients, has begun experimenting with AI content generation for audio and images, as well as voice generation to potentially create personalized messages in a candidate’s voice, founder Will Long said in an interview.

Democratic research and strategy group Honan Strategy Group, meanwhile, is trying to develop an AI research bot. It hopes to have a version with a female voice ready in time for the 2023 municipal elections, CEO Bradley Honan said, citing research that found both men and women were more likely to speak with a female interviewer.

Reporting by Alexandra Ulmer and Anna Tong in San Francisco; Additional reporting by Christina Anagnostopoulos, Zeba Siddiqui and Jonathan Spicer; Editing by Kenneth Lee, Ross Colvin and Pravin Char

Alexandra Ulmer

Thomson Reuters

A US national affairs correspondent who spent four years in Venezuela covering President Maduro’s administration and the humanitarian crisis, and has also reported from Chile, Argentina and India. She was Reuters Reporter of the Year in 2015 and a leading member of a team that won the Overseas Press Club Award for Best Latin American Coverage in 2018.

Anna Tong

Thomson Reuters

Anna Tong is a Reuters correspondent based in San Francisco, where she reports on the technology industry. She joined Reuters in 2023 after working at the San Francisco Standard as a data editor. Previously, Tong worked at tech startups as a product manager and at Google, where she worked in consumer insights and helped run a call center. Tong graduated from Harvard University.
