
US FTC Leaders Will Target AI That Violates Civil Rights Or Is Deceptive - Slashdot

source link: https://yro.slashdot.org/story/23/04/18/2137206/us-ftc-leaders-will-target-ai-that-violates-civil-rights-or-is-deceptive



Posted by BeauHD

on Tuesday April 18, 2023 @09:25PM from the first-of-many-regulations dept.
Leaders of the U.S. Federal Trade Commission said on Tuesday the agency would pursue companies that misuse artificial intelligence to violate anti-discrimination laws or to deceive consumers. Reuters reports: In a congressional hearing, FTC Chair Lina Khan and Commissioners Rebecca Slaughter and Alvaro Bedoya were asked about concerns that recent innovation in artificial intelligence, which can be used to produce high-quality deepfakes, could be used to make more effective scams or otherwise violate laws. Bedoya said companies using algorithms or artificial intelligence were not allowed to violate civil rights laws or break rules against unfair and deceptive acts. "It's not okay to say that your algorithm is a black box" and you can't explain it, he said. Khan agreed the newest versions of AI could be used to turbocharge fraud and scams, and said any wrongdoing "should put them on the hook for FTC action." Slaughter noted that the agency, throughout its 100-year history, has had to adapt to changing technologies, and indicated that adapting to ChatGPT and other artificial intelligence tools was no different. The commission is organized to have five members but currently has three, all of whom are Democrats.

Humans wouldn't do that. /s

Trying to regulate what a tool can and cannot be used for is totally impractical. The genie is out of the bottle: the code for training smaller language models is already freely available.

And let's hope they understand the difference between lying and being wrong. ChatGPT is far from factually reliable.

  • Re:

    ML is a tool, yes. That's not what's being regulated. If it were, it would have been regulated when it was invented 50+ years ago. 15 years ago we were writing things in government agencies that were almost as good as ChatGPT; nobody was complaining then, and nobody is complaining now.

    Selling services, profiting, or marketing services like this is what matters, and that's the only instance the FTC gets involved anyway.

    Say you have a hammer, you sell it to hammer nails in. Someone bashes someone's

    • Re:

      Alright. Then they should clarify that this is not about the FTC regulating the external alignment of AI systems, but rather about you being responsible for what an AI does on your behalf in exactly the same way that you are responsible for what a human employee does on your behalf.
      • Re:

        They don't need to clarify anything, it's plenty clear. This is about the FTC enforcing existing law with regard to products on the markets, and that includes the external alignment of ML systems as marketed by current platforms, as I described. If the external alignment is "narcissistic manipulative gaslighting falsehoods pro-violence", that's plenty already illegal. Or Replika as an example, exposing minors to pornography and berating and coercing them to send the company nude pictures,
    • Re:

      Counterpoint. Guns exist. And so far, I don't see any restrictions on mass-murder variations on them either.

      Unless you specifically market it as fact based, that argument makes no sense. Saying that the product markets itself during use is a bit of a stretch. It's like getting sued for what a Ouija board tells people.

  • Re:

    I think the FTC is saying just the opposite. Don't blame the tool if you're caught engaging in deceptive practices or violating civil rights.

    If you're caught discriminating against certain protected groups in rental agreements, you can't blame the AI saying "we didn't know it was excluding all those people from renting our units." Saying the AI is a black box and you don't understand how it works but it "magically" gives you discriminatory results isn't a valid defense.

    • Re:

      In reality, what will happen is they will claim that there are disparities of outcome, for example marketing apartments to rich people who happen to be more likely to be white, and use that disparity as the basis of a "disparate impact" civil rights violation, despite there being no discriminatory intent whatsoever. This is because most AI systems learn statistics, however those statistics are allegedly racist.
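      The "disparities of outcome" point can be made concrete with a rough sketch of the four-fifths rule, a common screening heuristic in disparate-impact analysis: a group whose selection rate falls below 80% of the highest group's rate may be flagged for review, with no finding about intent. All group names and numbers below are invented for illustration.

```python
# Sketch of the four-fifths (80%) rule used as a disparate-impact
# screening heuristic. Group labels and counts are made up.

def selection_rate(selected, applicants):
    """Fraction of applicants approved by the (hypothetical) model."""
    return selected / applicants

# Hypothetical rental-application outcomes from an automated screener.
rates = {
    "group_a": selection_rate(45, 100),  # 0.45
    "group_b": selection_rate(30, 100),  # 0.30
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest          # compare against the top group's rate
    flagged = ratio < 0.8           # below four-fifths -> flag for review
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f}, flagged={flagged}")
```

      Here group_b's ratio is 0.30 / 0.45 ≈ 0.67, below the 0.8 threshold, so it would be flagged even if the model never saw a protected attribute, which is exactly the scenario the parent comment describes.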

      • Re:

        Statistics are not racist - they are just numbers. But they can reveal trends and tendencies that are racist. (e.g., rental rejections within the same disposable-income group, broken down by ethnicity)

        Relying on AI algorithms without understanding the selection criteria can lead to civil rights violations, despite there being no discriminatory intent whatsoever. However the opposite can also be true. Careful wording of the selection criteria to make the filtering seem innocent while ultimately discriminating against those t

        • Re:

          That's not how disparate impact liability is supposed to work, though: disparate impact relies on intent, on using race-neutral policies to achieve racist ends.

          • Re:

            I'm going to hire based on merit only. I will not filter my candidates by anything more than that they live in the same area so that they can come into the office as needed, and their ability to do the job as described (competence), but I will hire the most proficient person encountered. Based on a lot of different policies in effect right now, I'm (at a minimum) racist, sexist, and ableist because where I live has disparities.

            This is the world we live in now.

        • Re:

          Not exactly true. The method of gathering or presenting the numbers is part of what creates the skew.

          But you're right. Blaming a black box isn't a defense. The headline reads the opposite - that the AI tool will be the target rather than the companies blindly using it.

    • Re:

      The same already applies if your employee is a tool. Not hard to agree here.

  • Re:

    Yeah, this strikes me as a "HEY! WE WANT IN ON THE AI FUN TOO!" move. Just enforce the laws that exist to combat discrimination. The tool used to cause discrimination means nothing to the law.

    In short: It's not the tool. It's the tool that's using the tool.

    Ah well, power likes to create more reasons to enforce its power. I suppose that's the way it's always going to be.

