
Google Pauses AI Image-generation of People After Diversity Backlash - Slashdot

 6 months ago
source link: https://tech.slashdot.org/story/24/02/22/1150243/google-pauses-ai-image-generation-of-people-after-diversity-backlash

Google has temporarily stopped its latest AI model, Gemini, from generating images of people (non-paywalled link), as a backlash erupted over the model's depiction of people from diverse backgrounds. From a report: Gemini creates realistic images based on users' descriptions in a similar manner to OpenAI's ChatGPT. Like other models, it is trained not to respond to dangerous or hateful prompts, and to introduce diversity into its outputs. However, some users have complained that it has overcorrected towards generating images of women and people of colour, such that they are featured in historically inaccurate contexts, for instance in depictions of Viking kings. Google said in a statement: "We're working to improve these kinds of depictions immediately. Gemini's image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here." It added that it would "pause the image-generation of people and will re-release an improved version soon."

Google has temporarily stopped its latest AI model, Gemini... like the old song: what is it good for? Absolutely nothing!

by Baron_Yam ( 643147 ) on Thursday February 22, 2024 @07:02AM (#64259394)

> it is trained not to respond to dangerous or hateful prompts, and to introduce diversity into its outputs.

First, what the hell is a 'dangerous' image prompt?

Second, there are rare but perfectly valid reasons for 'hateful' images. I get that there isn't a good way to avoid major problems with this one and blanket blocking is probably the only practical solution, but that's regrettable.

Third... 'introduce diversity'. WTF? So you take my prompt and then deliberately ignore part of it for a racial political agenda? If I ask for an image of a crowd in a specific location, or one that is gathered for a specific purpose, that crowd ought to have an appropriate racial mix for the location and/or purpose.

  • Re:

    The algorithm can only reproduce using the data it was trained upon, so here we are. The real problem here is that it proves that the agenda to promote diversity at the expense of everyone else is working very well.
    • Re:

      To be specific:

      If you do nothing and pay attention to nothing, models tend to stereotype. If most pictures of, say, "Scientist" are male, then it'll tend to draw scientists as male (often even more so than the training dataset, which itself tends to be biased relative to general population ratios). So curators often put effort into ensuring diversity in model datasets in order to get them to better represent the real world. Stable Diffusion XL imho generally struck a nice balance as an example - in my experience…

  • Re:

    Involuntary pornography, child pornography, images that reinforce dangerous racial stereotypes, fake news...

    Remember that Google isn't here to enable racists and paedophiles to express themselves in accordance with free speech rights, it's providing a service and will be held accountable for the output it provides.

  • Re:

    Whatever the creators of the AI have declared to be so.

    No text or image is objectively harmless or dangerous; it's all a matter of definition. Since we live in a world of trigger warnings, where words can be classified as violence, it's understandable that big companies err on the side of caution. Imagine they hadn't. The headline would be something about racism and would definitely not be nuanced.

    Yes. Because a few tech or history nerds will point out, fairly low-key, that in 1000 AD there would be maybe a dozen…

  • Re:

    I presumed that the intent was to provide diverse results when the prompt does not specify. Like "make a picture of some people playing soccer" doesn't specify race, gender, or location, so they figured the thing to do would be to default to diversity. A tendency toward any "default", whatever that default may be, would piss *someone* off when noticed. (A rough sketch of this idea follows the thread below.)

    • Re:

      In this case, people are outraged that the "default" isn't all white like their concept of the default. But it's totally the 'woke' that are racists, lol.
  • I want to agree with you harder but I cannot. You said everything that is on my (and any reasonable person's) mind already. I have nothing to contribute to the discussion other than this reply written in agreement.

  • Dangerous for Google, lawsuit-wise!

  • Machines used to do calculations and do exactly as you asked. Now they're loaded with bias and context so that they interpret what you ask them to do. So the intent is to make them more like humans, who can be guided, manipulated and told to obey/enforce the company's policy, and as such they're perfect candidates for replacing real humans, at least for "digital"/online-friendly/communications work.
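
Several comments above speculate that the diversity behaviour comes from rewriting under-specified prompts before they reach the image model. Google has not published how Gemini actually does this, so the following is a minimal, hypothetical Python sketch of that idea only; the names augment_prompt, DESCRIPTORS, and DEMOGRAPHIC_HINTS are invented for illustration and are not Gemini's implementation.

    import random

    # Descriptors a hypothetical middleware might inject when the user
    # leaves demographics unspecified. Purely illustrative values.
    DESCRIPTORS = [
        "a Black woman", "an East Asian man", "a South Asian woman",
        "a white man", "a Hispanic woman", "a Middle Eastern man",
    ]

    # Crude keyword check for prompts that already specify demographics.
    DEMOGRAPHIC_HINTS = {
        "man", "woman", "men", "women", "male", "female",
        "black", "white", "asian", "hispanic", "african", "european",
    }

    def augment_prompt(prompt: str) -> str:
        """Append a random descriptor unless the prompt already names one."""
        words = {w.strip(".,!?").lower() for w in prompt.split()}
        if words & DEMOGRAPHIC_HINTS:
            return prompt  # the user was specific; leave the prompt alone
        return f"{prompt}, depicted as {random.choice(DESCRIPTORS)}"

    print(augment_prompt("a scientist working in a lab"))
    # e.g. "a scientist working in a lab, depicted as a South Asian woman"
    print(augment_prompt("a Viking king"))
    # also augmented: the historically inaccurate case the thread complains about

The second example shows the failure mode the story is about: a prompt like "a Viking king" carries implicit historical context that a naive keyword check cannot see, so it gets augmented anyway.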
