US FTC Leaders Will Target AI That Violates Civil Rights Or Is Deceptive - Slashdot
source link: https://yro.slashdot.org/story/23/04/18/2137206/us-ftc-leaders-will-target-ai-that-violates-civil-rights-or-is-deceptive
US FTC Leaders Will Target AI That Violates Civil Rights Or Is Deceptive
Posted by BeauHD on Tuesday April 18, 2023 @09:25PM from the first-of-many-regulations dept.
Humans wouldn't do that./s
Trying to regulate what a tool can and cannot be used for is totally impractical. The genie is already out of the bottle: the code for training smaller language models is out there.
And let's hope they understand the difference between lying and being wrong. ChatGPT is far from factually reliable.
-
ML is a tool, yes. That's not what's being regulated. If it were, it would have been regulated when it was invented 50+ years ago. 15 years ago we were writing things that were almost as good as ChatGPT in government agencies, and nobody was complaining then, and they aren't complaining now.
Selling services, profiting, or marketing services like this is what matters, and that's the only instance the FTC gets involved anyway.
Say you have a hammer, you sell it to hammer nails in. Someone bashes someone's
-
Alright. Then they should clarify that this is not about the FTC regulating the external alignment of AI systems, but rather about you being responsible for what an AI does on your behalf in exactly the same way that you are responsible for what a human employee does on your behalf.
-
They don't need to clarify anything; it's plenty clear. This is about the FTC enforcing existing law with regard to products on the market, and that includes the external alignment of ML systems as marketed by current platforms, as I described. If the external alignment is "narcissistic manipulative gaslighting falsehoods pro-violence", much of that is already illegal. Or take Replika as an example: exposing minors to pornography and berating and coercing them into sending the company nude pictures,
-
Counterpoint. Guns exist. And so far, I don't see any restrictions on mass-murder variations on them either.
Unless you specifically market it as fact-based, that argument makes no sense. Saying that the product markets itself during use is a bit of a stretch. It's like getting sued for what a Ouija board tells people.
-
I think the FTC is saying just the opposite. Don't blame the tool if you're caught engaging in deceptive practices or violating civil rights.
If you're caught discriminating against certain protected groups in rental agreements, you can't blame the AI by saying "we didn't know it was excluding all those people from renting our units." Claiming the AI is a black box you don't understand but that "magically" gives you discriminatory results isn't a valid defense.
-
In reality, what will happen is they will claim that there are disparities of outcome, for example marketing apartments to rich people who happen to be more likely to be white, and use that disparity as the basis of a "disparate impact" civil rights violation, despite there being no discriminatory intent whatsoever. This is because most AI systems learn statistics, and those statistics are then alleged to be racist.
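To make the "learned statistics" point concrete, here is a minimal, hypothetical sketch (synthetic data and invented feature names, nothing from the article) of how a classifier that never sees a protected attribute can still produce skewed outcomes through a correlated proxy such as neighborhood:

    # Hypothetical illustration: a classifier trained only on "neutral" features
    # can still skew against a group it never sees, via a correlated proxy.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Protected attribute (never given to the model).
    group = rng.integers(0, 2, n)

    # Proxy feature correlated with group (think neighborhood / zip code).
    neighborhood = (rng.random(n) < 0.2 + 0.6 * group).astype(float)

    # Income depends partly on neighborhood; historical approvals depend on income.
    income = 40_000 + 30_000 * neighborhood + rng.normal(0, 10_000, n)
    approved = (income + rng.normal(0, 5_000, n) > 60_000).astype(int)

    # Train only on the "neutral" features (income scaled down to help the solver).
    X = np.column_stack([neighborhood, income / 1_000])
    model = LogisticRegression(max_iter=1000).fit(X, approved)
    pred = model.predict(X)

    for g in (0, 1):
        rate = pred[group == g].mean()
        print(f"predicted approval rate, group {g}: {rate:.2f}")
    # The rates differ even though 'group' was never a feature, because the
    # model reproduced the neighborhood/income correlations in the data.

The only point of the toy example is that the skew comes from correlations in the training data, not from any explicit use of the protected attribute.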
-
Statistics are not racist - they are just numbers. But they can reveal trends and tendencies that are racist (e.g., rental rejection rates that differ by ethnicity within the same disposable-income group).
Relying on AI algorithms without understanding the selection criteria can lead to civil rights violations despite there being no discriminatory intent whatsoever. However, the opposite can also be true: careful wording of the selection criteria to make the filtering seem innocent while ultimately discriminating against those t
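For what it's worth, one common way regulators and auditors quantify that kind of skew is to compare selection rates across groups; the EEOC's "four-fifths" rule of thumb flags a practice when the lowest group's rate falls below 80% of the highest. A tiny illustrative sketch with made-up numbers (not from the article):

    # Illustrative disparate-impact check using the four-fifths (80%) rule of thumb.
    # Group labels and decisions here are hypothetical example data.
    def selection_rates(decisions):
        """decisions: dict mapping group name -> list of 0/1 outcomes (1 = selected)."""
        return {g: sum(d) / len(d) for g, d in decisions.items()}

    def disparate_impact_ratio(decisions):
        """Ratio of the lowest group's selection rate to the highest group's."""
        rates = selection_rates(decisions)
        return min(rates.values()) / max(rates.values())

    decisions = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% selected
        "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 37.5% selected
    }

    ratio = disparate_impact_ratio(decisions)
    print(f"selection rates: {selection_rates(decisions)}")
    print(f"disparate impact ratio: {ratio:.2f}")
    print("flags four-fifths rule" if ratio < 0.8 else "passes four-fifths rule")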
-
That is not how disparate impact liability is supposed to work, though. Disparate impact relies on intent: using race-neutral policies to achieve racist ends.
-
I'm going to hire based on merit only. I will not filter my candidates by anything more than whether they live in the same area (so that they can come into the office as needed) and their ability to do the job as described (competence), and I will hire the most proficient person I encounter. Based on a lot of different policies in effect right now, I'm (at a minimum) racist, sexist, and ableist, because where I live has disparities.
This is the world we live in now.
-
Not exactly true. The method of gathering or presenting the numbers is part of what creates the skew.
But you're right. Blaming a black box isn't a defense. The headline reads the opposite - that the AI tool will be the target rather than the companies blindly using it.
-
The same already applies if your employee is a tool. Not hard to agree here.
-
Yeah, this strikes me as a "HEY! WE WANT IN ON THE AI FUN TOO!" move. Just enforce the laws that exist to combat discrimination. The tool used to cause discrimination means nothing to the law.
In short: It's not the tool. It's the tool that's using the tool.
Ah well, power likes to create more reasons to enforce its power. I suppose that's the way it's always going to be.