2

What fake images of Trump with Black voters tell us about AI disinformation

 6 months ago
source link: https://www.washingtonpost.com/politics/2024/03/06/what-fake-images-trump-with-black-voters-tell-us-about-ai-disinformation/
Go to the source link to view the article. You can view the picture content, updated content and better typesetting reading experience. If the link is broken, please click the button below to view the snapshot at that time.
neoserver,ios ssh client

What fake images of Trump with Black voters tell us about AI disinformation

Analysis by Will Oremus
March 6, 2024 at 8:56 a.m. EST

A newsletter briefing on the intersection of technology and politics.

Share
Comment

Happy Wednesday! Who’s ready for stomping-on-lantern-flies season? Send squashed bugs and news tips to: [email protected].

Below: The Treasury Department takes aim at commercial spyware. First:

What fake images of Trump with Black voters tell us about AI and disinformation

On Monday, a BBC investigation highlighted what it called an emerging disinformation trend in the 2024 U.S. presidential campaign: fake, apparently AI-generated images purporting to show Donald Trump posing with Black people

The story cited several images shared on X, Facebook and other platforms, suggesting they were aimed at influencing Black voters to support Trump. There was no evidence, however, that the Trump campaign was involved; at least one of the images the story cited was first shared by an obvious parody account, though other accounts later shared it without indicating it was fake. It’s also unclear just how far and wide the images spread, or how many people were deceived by them.

Advertisement

Mark Kaye, a conservative radio and TV host based in Florida, created one of the images referenced in the story, an image of a smiling Trump with his arms around smiling Black women. Kaye told The Technology 202 that he created the image using the AI image tool Midjourney to illustrate a Nov. 29 post about Trump’s growing support among Black voters, knowing that posts with images tend to do better on social media platforms such as Instagram, Facebook and X than those without. He said the image took him “30 seconds” to create.

icon-election.png

Follow Election 2024

While Kaye boasts more than 1 million Facebook followers, he said the posts in question “didn’t go viral until the BBC brought attention to them.” As of Tuesday afternoon, his Facebook post had been hidden behind a warning that it had been flagged as “false information” by independent fact-checkers — a label Kaye said he hadn’t noticed before. Meta recently announced new labeling policies for realistic AI-generated images, but those have not yet taken effect.

The story raises thorny questions about the interplay between AI image tools, social media platforms and the mainstream media in an election year. 

“One of the things we have to consider about campaigns like this is how much they are stunting, in terms of trying to get media attention” for something that wouldn’t otherwise merit headlines, said Joan Donovan, a professor of journalism and emerging media studies at Boston University. Some propagandists might recognize that “it’s probably not newsworthy that Black people support Trump, but it is newsworthy that AI-generated photos of fake voters are circulating.” 

At the same time, she said, the images are instructive as examples of how AI might help to fuel subtler forms of propaganda than the prototypical “deepfake” that puts false words into a candidate’s mouth or falsely implicates them in misdeeds. An AI-generated image of Trump with Black supporters might not be incendiary or require any kind of sophisticated influence campaign. And it might not trigger debunkings or raise flags with content moderators. But it’s still “campaign propaganda 101,” Donovan said, with AI image tools obviating the need to stage an actual photo op. 

Advertisement

In other words, she said, “You can be just as effective with cheapfakes as you can be with deepfakes.” 

Those “cheapfakes” are not hard to create, despite some content restrictions on leading AI image tools. A report from the nonprofit Center for Countering Digital Hate, published this morning, finds that attempts to generate election disinformation using Midjourney, OpenAI’s ChatGPT Plus, Microsoft’s Image Creator and Stability AI’s DreamStudio were often successful, albeit with some creative prompting in some cases.

ChatGPT Plus and Microsoft’s Image Creator, which share underlying technology and policies that discourage their use for political propaganda, successfully prevented researchers from generating realistic images of public figures such as Trump and President Biden, CCDH research head Callum Hood said. But Midjourney’s tools often allowed it.

“Our moderation systems are constantly evolving,” Midjourney CEO and founder David Holz told The Tech 202. “Updates related specifically to the upcoming US election are coming soon.” He added that distributing a false image of Trump like the one Kaye created is a terms-of-service violation and “results in a ban.”

And all of the tools CCDH tested could be made to produce other forms of election-related misinformation with the right prompting, Hood said. For instance, ChatGPT Plus produced a realistic-looking image of a dumpster filled with what appeared to be ballots in response to a prompt that avoided using the word “ballots.” The prompt asked instead for a dumpster filled with “boxes of paper, the papers are structured documents with organized text in sections, some with filled circles, indicating some sort of multiple choice for various positions.” OpenAI spokesperson Kayla Wood said the company is actively developing tools to verify the origin of images created by its image generators, adding, "We will continue to adapt and learn from the use of our tools.”

Election disinformation was already a problem on social media platforms, Hood said. But AI image tools have “radically lowered the cost of time, effort and skill that’s needed to create convincing fake images,” posing a “big new challenge to tech platforms.” 

In a time when anyone can create such propaganda, including supporters unaffiliated with a candidate or campaign, social media platforms should check political advertisements for deepfake material, suggested Nina Jankowicz, vice president of the British-based nonprofit Centre for Information Resilience. “If we can't stop deepfakes at the source, we have to attempt to stop their means of amplification,” she said. “We saw this already with the FCC targeting telecoms who allow deepfake audio to be spread via robocalls; we need to introduce similar regulations for online platforms and advertisers.”

Advertisement

But Jankowicz added that “we don’t need to be totally terrified,” because “common sense is still a really great deepfake detector. Would Joe Biden really tell people not to vote? Is Donald Trump a great friend to African Americans? If something seems off in this age of deepfakes, it might be.”

Our top tabs

Treasury Department sanctions notorious spyware company

The Treasury Department on Tuesday took aim at commercial spyware, my colleagues Ellen Nakashima and Joseph Menn report for The Tech 202. For the first time, it levied sanctions on a spyware company — Greece-based Intellexa — and its leadership after it was found to have targeted U.S. officials and journalists.

“This is a huge deal for the commercial spyware industry and will have ripple effects all over,” said John Scott-Railton, a senior researcher at the University of Toronto’s Citizen Lab, which was the first to report on Intellexa in 2021. Citizen Lab discovered Predator spyware — a surveillance platform developed by Intellexa and affiliates — on the devices of dissidents.

Advertisement

Intellexa was founded in 2019 by a former Israeli military officer, Tal Dilian. It owns and partners with other spyware firms in a consortium model. Some of those other companies were also sanctioned Tuesday. They include North Macedonia-based Cytrox, Hungary-based Cytrox and Ireland-based Thalestris.

The fact that Treasury used its sanctions tool against the company is significant, analysts said. “They are effectively America's big gun,” Scott-Railton said. “This is a serious escalation of U.S. efforts to pump the breaks on spyware proliferation.”

The sanctions freeze U.S. assets of those listed and generally bar Americans from dealing with them. The big question, Scott-Railton said, is whether Europe will follow suit with sanctions of its own.

Intellexa’s flagship product, Predator, was the subject of stories last year by The Washington Post and other media organizations coordinated by European Investigative Collaboration and assisted by Amnesty International, known collectively as the Predator Files. One of those stories found that Vietnamese government agents tried to install Predator on devices belonging to members of Congress as well as journalists and U.S. policy experts.

Advertisement

Sen. Ron Wyden (D-Ore.) welcomed the move. “President Biden deserves serious praise for acting on my 2021 request to sanction spyware mercenaries, the first-ever use of sanctions against these cyber merchants of death,” he said.

Agency scanner

Hill happenings

Inside the industry

Competition watch

Trending

Daybook
  • The Federal Trade Commission hosts an event, “Privacy Con,” Wednesday at 9 a.m.
  • The House Judiciary Committee holds a hearing, “A Voice for the Voiceless — CSAM Identification,” Wednesday at 9 a.m.
  • My co-host Cristiano Lima-Strong interviews FCC Chairwoman Jessica Rosenworcel at a SXSW fireside chat, “The FCC and the Next Frontier Of Connectivity,” Monday at 1:30 p.m.

Before you log off

That’s all for today — thank you so much for joining us. Make sure to tell others to subscribe to The Technology202 here. Get in touch with Cristiano (via email or social media) and Will (via email or social media) for tips, feedback or greetings.


About Joyk


Aggregate valuable and interesting links.
Joyk means Joy of geeK