

source link: https://slashdot.org/story/24/01/14/0145246/ask-slashdot-could-a-form-of-watermarking-prevent-ai-deep-faking

Ask Slashdot: Could a Form of Watermarking Prevent AI Deep Faking? (msn.com) 36

Posted by EditorDavid

on Sunday January 14, 2024 @06:09PM from the AI-authenticating dept.

An opinion piece in the Los Angeles Times imagines a world after "the largest coordinated deepfake attack in history... a steady flow of new deepfakes, mostly manufactured in Russia, North Korea, China and Iran." The breakthrough actually came in early 2026 from a working group of digital journalists from U.S. and international news organizations. Their goal was to find a way to keep deepfakes out of news reports... Journalism organizations formed the FAC Alliance — "Fact Authenticated Content" — based on a simple insight: There was already far too much AI fakery loose in the world to try to enforce a watermarking system for dis- and misinformation. And even the strictest labeling rules would simply be ignored by bad actors. But it would be possible to watermark pieces of content that aren't deepfakes.

And so was born the voluntary FACStamp on May 1, 2026...

The newest phones, tablets, cameras, recorders and desktop computers all include software that automatically inserts the FACStamp code into every piece of visual or audio content as it's captured, before any AI modification can be applied. This proves that the image, sound or video was not generated by AI. You can also download the FAC app, which does the same for older equipment... [T]o retain the FACStamp, your computer must be connected to the non-profit FAC Verification Center. The center's computers detect if the editing is minor — such as cropping or even cosmetic face-tuning — and the stamp remains. Any larger manipulation, from swapping faces to faking backgrounds, and the FACStamp vanishes.
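The article doesn't specify how a FACStamp would work cryptographically. As a rough illustration of the capture-time-stamping idea, the sketch below uses an HMAC over the raw capture bytes as a stand-in for a device-held signing key; `DEVICE_KEY`, `stamp`, and `verify` are all hypothetical names, and a real system would use asymmetric signatures anchored in device hardware (as in the C2PA standard) rather than a shared secret.

```python
import hashlib
import hmac

# Assumed: a per-device secret provisioned at manufacture. In practice this
# would be an asymmetric key pair in a secure element, not a shared secret.
DEVICE_KEY = b"device-secret-key"

def stamp(capture_bytes: bytes) -> bytes:
    """Compute an authentication stamp over the capture, before any edits."""
    return hmac.new(DEVICE_KEY, capture_bytes, hashlib.sha256).digest()

def verify(capture_bytes: bytes, tag: bytes) -> bool:
    """Verification-center check: the stamp only matches unaltered bytes."""
    return hmac.compare_digest(stamp(capture_bytes), tag)

original = b"\x89PNG...raw sensor data"
tag = stamp(original)
assert verify(original, tag)                  # untouched capture passes
assert not verify(original + b"edit", tag)    # any modification breaks the stamp
```

This also shows why the "minor edits keep the stamp" behavior described above is the hard part: any bit-level change invalidates a plain hash or signature, so tolerating cropping or face-tuning would require a trusted service to inspect the edit and re-issue the stamp — which is exactly the role the article assigns to the FAC Verification Center.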

It turned out that plenty of people could use the FACStamp. Internet retailers embraced FACStamps for videos and images of their products. Individuals soon followed, using FACStamps to sell goods online — when potential buyers are judging a used pickup truck or secondhand sofa, it's reassuring to know that the image wasn't spun out or scrubbed up by AI.

The article envisions the world of 2028, with the authentication stamp appearing on everything from social media posts to dating app profiles: Even the AI industry supports the use of FACStamps. During training runs on the internet, if an AI program absorbs excessive amounts of AI-generated rather than authentic data, it may undergo "model collapse" and become wildly inaccurate. So the FACStamp helps AI companies train their models solely on reality. A bipartisan group of senators and House members plans to introduce the Right to Reality Act when the next Congress opens in January 2029. It will mandate the use of FACStamps in multiple sectors, including local government, shopping sites and investment and real estate offerings. Counterfeiting a FACStamp would become a criminal offense. Polling indicates widespread public support for the act, and the FAC Alliance has already begun a branding campaign.

But all this leaves Slashdot reader Bruce66423 with a question. "Is it really technically possible to achieve such a clear distinction, or would, in practice, AI be able to replicate the necessary authentication?"
