Google will label fake images created with its A.I.
source link: https://www.cnbc.com/2023/05/10/google-will-label-fake-images-created-with-its-ai-.html
- Google will embed information called a markup inside images created by its AI models to warn people that they were originally created by a computer, the company said Wednesday.
- “Image self-labeled as AI generated,” reads one example warning provided by Google.
- The move is the most significant effort by a big technology company so far to label and classify output from so-called generative AI.
Google will embed information called a markup inside images created by its AI models to warn people the images were originally created by a computer, it said on Wednesday.
The data inside the images won’t be visible to the human eye, but software such as Google Search will be able to read it and then display a label warning users. Google will also provide additional information about all images in its results to help prevent deception, including when the image was first uploaded to the search engine and whether it’s been cited by news sites.
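The idea of data that is invisible to the eye but readable by software can be sketched with a PNG tEXt chunk, which stores a key/value pair that image viewers don't render. This is only an illustration of the general mechanism, not Google's actual markup format, and the `digital_source_type` key name is an assumption:

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data +
            struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def minimal_png() -> bytes:
    """Create a valid 1x1 red PNG from scratch (no third-party libraries)."""
    sig = b"\x89PNG\r\n\x1a\n"
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)  # 1x1, 8-bit RGB
    raw = b"\x00\xff\x00\x00"  # filter byte + one red pixel
    return (sig + png_chunk(b"IHDR", ihdr) +
            png_chunk(b"IDAT", zlib.compress(raw)) + png_chunk(b"IEND", b""))

def add_text_chunk(png: bytes, key: str, value: str) -> bytes:
    """Embed an invisible key/value tEXt chunk just before the IEND chunk."""
    iend = png.rindex(b"IEND") - 4  # back up over IEND's length field
    chunk = png_chunk(b"tEXt", key.encode("latin-1") + b"\x00" + value.encode("latin-1"))
    return png[:iend] + chunk + png[iend:]

def read_text_chunks(png: bytes) -> dict:
    """Walk the chunk list and collect all tEXt key/value pairs."""
    out, pos = {}, 8  # skip the 8-byte PNG signature
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        if ctype == b"tEXt":
            key, _, val = png[pos + 8:pos + 8 + length].partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # length + type + data + CRC
    return out

tagged = add_text_chunk(minimal_png(), "digital_source_type", "trainedAlgorithmicMedia")
print(read_text_chunks(tagged))
```

The tagged file still displays as an ordinary 1x1 image; only software that walks the chunk list, as a search engine's indexer could, sees the label.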
“Image self-labeled as AI generated,” reads one example warning provided by Google.
The move is the most significant effort by a big technology company so far to label and classify output from so-called generative AI. Officials and technology workers have warned the technology’s capabilities to create realistic images or fluent passages of text could help spammers, scammers and propagandists fool people.
For example, a recent image of Pope Francis in a stylish winter jacket, generated with the Midjourney app, went viral and fooled some people into thinking it was real.
One issue facing the AI industry is that there is no reliable way to detect generated images. While there are often clues, such as badly drawn hands, there isn't a definitive way to say which images were made by a computer and which were drawn or photographed by a human.
Google's approach is to label the images when they come out of the AI system, instead of trying to determine whether they're real later on. Google said Shutterstock and Midjourney would support this new markup approach. Google developer documentation says the markup will be able to categorize images as "trained algorithmic media," which was made by an AI model; a composite image that was partially made with an AI model; or "algorithmic media," which was created by a computer but isn't based on training data.
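The three categories above suggest how a reader application might map a markup value it finds in an image to a user-facing warning. This is a hypothetical sketch: the enum values borrow IPTC "digital source type" naming as an assumption, and only the first warning string is one Google actually provided; the other two are invented here for illustration.

```python
from enum import Enum

class SourceType(Enum):
    """The three provenance categories described in Google's developer docs
    (value strings follow IPTC naming as an assumption, not Google's schema)."""
    TRAINED_ALGORITHMIC_MEDIA = "trainedAlgorithmicMedia"       # made by an AI model
    COMPOSITE = "compositeWithTrainedAlgorithmicMedia"          # partially AI-made
    ALGORITHMIC_MEDIA = "algorithmicMedia"                      # computer-made, no training data

WARNINGS = {
    SourceType.TRAINED_ALGORITHMIC_MEDIA: "Image self-labeled as AI generated",   # Google's example
    SourceType.COMPOSITE: "Image self-labeled as partially AI generated",          # hypothetical
    SourceType.ALGORITHMIC_MEDIA: "Image self-labeled as computer generated",      # hypothetical
}

def warning_for(raw_value: str) -> str:
    """Return the warning label for a markup value, or '' if unrecognized."""
    try:
        return WARNINGS[SourceType(raw_value)]
    except ValueError:
        return ""
```

An unrecognized value yields an empty string rather than an error, since images with no markup or unknown markup should simply show no label.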
Google held its annual developers conference Wednesday, where it announced a $1,799 folding phone and additional AI features for other Google products, including an image generator.