
Lawmakers are wrestling with how to regulate deepfakes

source link: https://www.washingtonpost.com/politics/2024/03/13/lawmakers-are-wrestling-with-how-regulate-deepfakes/

Analysis by Will Oremus
March 13, 2024 at 9:05 a.m. EDT
The Technology 202

A newsletter briefing on the intersection of technology and politics.


Happy TikTok ban vote day! We’re covering this morning’s House vote with a live blog, so tune in if you’re interested. Send news tips to: [email protected].

Lawmakers are wrestling with how to regulate deepfakes

When it comes to AI “deepfakes” — deceptive videos made with the help of artificial intelligence tools — political fakes often get the bulk of lawmakers' attention. But at a House subcommittee hearing on Tuesday, the use of AI tools to generate nonconsensual nude images and child sex abuse material took center stage. 

At a hearing whose star witness was Dorota Mani, the mother of a New Jersey teen allegedly victimized by schoolmates’ circulation of faked nude images, there was no debate that deepfake porn and nudes are a problem — one that disproportionately affects women and children and can ruin their lives. In his opening remarks, Rep. Gerry Connolly (D-Va.) cited a recent report that claimed 98 percent of all online deepfake videos were pornographic and that women or girls were the subject in 99 percent of them. 


The debate now is what Congress should do about it — and what it can do without running afoul of the First Amendment.


In one corner of the debate was Carl Szabo, vice president and general counsel of NetChoice, a tech industry group funded by Google and Meta, among others. Testifying at the hearing, Szabo argued that most of the harms posed by AI deepfakes can and should be addressed through existing laws rather than new ones.

“Every law that applies offline applies online,” he said. “So when it comes to harassment, we need to enforce harassment law. When it comes to fraud, we need to enforce fraud law.”

Szabo acknowledged, however, that existing laws have some gaps when it comes to deepfakes that could be addressed with narrowly tailored legislation. For instance, he said NetChoice supports an amendment clarifying that artificially generated child pornography violates the same laws that apply to other forms of child sex abuse material. But he said he's wary of government “overreach,” adding: “The last thing we want is to make a law that doesn't hold up in court.”


His testimony before the House Oversight cybersecurity subcommittee signaled that the tech industry will fight legislation that seeks to hold tech firms responsible for AI-related harms. It will push instead to put the onus on individual “bad actors” who abuse the technology.

The problem is that “current laws aren’t working,” countered Ari Ezra Waldman, a law professor at the University of California at Irvine. While it has always been possible to manipulate images, he said AI presents a “proliferation problem,” because it has dramatically lowered the barriers to both creating and sharing those images. 

He argued that both civil and criminal penalties are needed to deter deepfake porn, and that platforms should bear some of the responsibility. Suing the individuals who create and publish the material isn’t enough to make victims whole, he said, because “so many of the perpetrators of this are the dude in the basement who's probably judgment-proof.”


Relying on AI companies to police abuse of their own software is another dead end, argued John Shehan, senior vice president of the National Center for Missing and Exploited Children. 

As reports of AI-generated child exploitation material roll in to NCMEC's CyberTipline, the majority are not coming from the AI companies themselves, he said. In fact, many of the companies behind prominent AI image tools haven't even registered to make such reports. 

Then there is the problem of distribution. Connolly pointed to the Taylor Swift deepfake nudes that circulated widely on X in January, saying the fact that it happened to a star with as much power as Swift “emphasizes that no one is safe.” And my colleague Drew Harwell reported last month that deepfakes of online influencer Bobbi Althoff, which first surfaced on porn sites, quickly went viral when they were posted to X.


Social media companies such as X have been able to lean on Section 230, an immunity shield for websites that host user-generated content, for protection against lawsuits stemming from harmful material their users post. There's an emerging consensus that Section 230 may not protect tech firms from lawsuits arising from material generated by their own AI tools, but plaintiffs still need a cause of action to win such suits.

There are House bills in the works to address the issue, though they face a long road to passage.

One that has Mani’s backing, and some bipartisan support, comes from Rep. Joseph Morelle (D-N.Y.). His bill would criminalize the sharing of nonconsensual, digitally altered images and create a private right of action for its victims. And the subcommittee’s chair, Rep. Nancy Mace (R-S.C.), introduced a deepfake pornography bill of her own earlier this month, which so far has three Republican co-sponsors.


Waldman told the subcommittee Tuesday that the First Amendment shouldn't stand in the way of deepfake laws, which he compared to existing laws prohibiting counterfeiting, impersonation and forgery, as long as they carve out exceptions for satire and other protected speech.

Ari Cohn, free speech counsel at the digital rights group TechFreedom, told The Tech 202 he isn't so sure. While Congress may be able to show a compelling interest in regulating certain types of deepfakes, he said, “not every deepfake is inherently defamatory or fraudulent,” and “any law treating all deepfakes the same would be constitutionally suspect at best.”


That’s all for today — thank you so much for joining us. Make sure to tell others to subscribe to The Technology 202 here. Get in touch with Cristiano (via email or social media) and Will (via email or social media) for tips, feedback or greetings!
