White House Unveils Initiatives To Reduce Risks of AI - Slashdot

source link: https://news.slashdot.org/story/23/05/04/0934228/white-house-unveils-initiatives-to-reduce-risks-of-ai


White House Unveils Initiatives To Reduce Risks of AI (nytimes.com) 27

Posted by msmash

on Thursday May 04, 2023 @10:00AM from the moving-forward dept.
The White House on Thursday announced its first new initiatives aimed at taming the risks of artificial intelligence since a boom in A.I.-powered chatbots has prompted growing calls to regulate the technology. From a report: The National Science Foundation plans to spend $140 million on new research centers devoted to A.I., White House officials said. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards "the American people's rights and safety," adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference. The announcements came hours before Vice President Kamala Harris and other administration officials were scheduled to meet with the chief executives of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to discuss the technology.

A senior administration official said on Wednesday that the White House planned to impress upon the companies that they had a responsibility to address the risks of new A.I. developments. The White House has been under growing pressure to police A.I. that is capable of crafting sophisticated prose and lifelike images. The explosion of interest in the technology began last year when OpenAI released ChatGPT to the public and people immediately began using it to search for information, do schoolwork and assist them with their jobs. Since then, some of the biggest tech companies have rushed to incorporate chatbots into their products and accelerated A.I. research, while venture capitalists have poured money into A.I. start-ups.
    • Lol. It's cute that you think either side pushes for, or even wants a "free market".
    • As with my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

      Yes, there may still be some exchange transactions. But the balance in the whole system will shift, especially towards more subsistence, gift, and planned transactions (since most human labor will no longer have much value given AI-powered robot slaves). And also sadly there may be more theft transactions if deeper issues about social equity are not

  • 'let's control and regulate'
    AI is something that someone codes up, then runs. It's covered under the First Amendment since Bernstein v. Department of Justice.
    Biden must view it as a threat to his reelection, or some people whose jobs are threatened by AI are bribing "10% for the Big guy" in order to keep their jobs via legislation.

    • Re:

      Next thing they will say is you have to wear a special mask and get special injections to use AI.

      • Re:

        Holy Christ this is some hardcore Poe's law. I seriously can't tell if you're joking or not but I seriously hope that you are....
    • Re:

      "Twenty-six experts on the security implications of emerging technologies have jointly authored a ground-breaking report – sounding the alarm about the potential malicious use of artificial intelligence (AI) by rogue states, criminals, and terrorists."
      See: https://www.cam.ac.uk/Maliciou... [cam.ac.uk]
      • Re:

        What about the use of AI by corporations and governments to do truly nefarious things without consequence? AI could be used by corporations to "generate a list of job requirements for a future job which would exclude all applicants except those of a particular race, without mentioning that race", or by a government to "Draft a series of laws to suppress the ${civil right} of the citizens in steps so incremental that public dissatisfaction will never rise to the level of adverse action against the drafting

        • Re:

          Horrible possibility. AI could be used for manipulation of people and ideologies with fake news, and so on. The report talks about "The use of AI to automate tasks involved in surveillance (e.g. analysing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation."
    • Did it ever cross your mind that there's a reason why that's our reaction? But maybe, after centuries of letting the chips fall where they may and seeing the disasters and pain that caused, we want to stop problems before they start?

      In liberal politics there's a phrase. "Nobody ever got a ticker tape parade for preventing a disaster". One of the major problems left-wing politics has is that when you put policies in place to prevent disasters inevitably people come out of the woodwork to say th
  • I bet one thing the White House does know is that it's going to WIPE a bunch of jobs out. It's already starting with IBM's hiring directive. The actors, writers, teachers, musicians, and other content creators look to me to have had a great run. Wasn't it nice to follow your dreams and get a great job? Now, their jobs are under threat. How do they like it, I wonder? We've lived with offshoring swords of Damocles since about 2000. Now, the survivors like me might be a bit jaded with your "OMG muh job!" argum
  • How does this relate to the existing programs on ai.gov? The article doesn't say. Or maybe it does, who the hell knows, it's paywalled.

  • Administration Backs 4-Year-Old's 'Common Sense' Cookie Policy

    The White House today endorsed 4-year-old Billy Smith's proposals to establish new "cookie research centers" and conduct government review of innovative cookie products. Press Secretary Jen Psaki called the proposals "common sense steps to ensure the health, safety and nutrition of the American people in the face of a booming cookie industry."

    Smith's proposals come at a time when the $200 billion cookie market has prompted concerns over effects
    • Re:

      The thing of it is, it is just enough generic government BS that it fits every other topic of the day. It's missing a couple of things. I just can't quite put my finger on it though ;-)

  • by cstacy ( 534252 ) on Thursday May 04, 2023 @10:43AM (#63496860)

    This is the government responding to calls for them to "Do Something" and the fear of being blamed. The conferences, the proclamations and orders, and even any laws or regulations that get passed, all will do nothing. Well, nothing that is any good, anyway.

    Everyone knows that sometime this year, there will be some dead babies due to mothers following medical advice that they got online from a chatbot. This is absolutely going to happen and there's no stopping it.

    They can see the finger-pointing coming, and are in a panic because that dead baby could be happening tomorrow. Rest assured, citizens, we are doing something about it!

    They're going to pass laws and regulations saying stuff like:

    * The AI's output should be informative, logical, and actionable.
    * The AI's logic and reasoning should be rigorous, intelligent, and defensible.
    * The AI can provide additional relevant details to respond thoroughly and comprehensively to cover multiple aspects in depth.
    * The AI always references factual statements to the search results.

    Additionally

    * Responses should avoid being vague, controversial, or off-topic. They should also be positive, interesting, entertaining, and engaging.

    Hello.
    May I introduce you to Sydney.

        • Re:

          > I'm not sure how a chatbot here is any different from a web search.

          A web search takes you to a site, a chatbot doesn't. I realize Google et al. now have automated summaries, but maybe that's a dumb idea for medical advice, AI or not.

          > Before that I could have called up a random friend.

          Your random friend probably didn't claim to be a medical expert, and if they did and lied, they could be jailed for practicing medicine without a license.

    • To an actual threat. We already have malware authors and spyware authors using ChatGPT to improve the quality of their email spam and trick people who otherwise wouldn't be fooled. Not to mention the huge amount of political propaganda that's going to be coming out of our enemies overseas. I mean for Christ's sakes we have people rooting for Vladimir Putin in this country. Whatever else you think about our locally grown politicians, Vlad Putin is not your friend. But I somehow have to explain that to people.

      An
  • Understand that there is no I (cognitive intelligence) in what the PR departments and the marketers are calling Artificial Intelligence. It's just really good automation. But don't get me wrong, it is still very dangerous! In fact it may be even more dangerous, since there isn't any ethics, intelligence, morals, or honor involved. When programmed to kill, as it assuredly will be, killing is exactly what it will do. And it will be very efficient at it. Governments will not be able to pass up armies that will not question anything.
  • The fact of the matter is that, regardless of consequences, AI cannot and will not be a completely controlled thing.

    The USA is a country that can't stop gun violence, drug use, or enforce reasonable antitrust laws. While they can hire expertise to make recommendations, few in government are technically savvy enough to grasp the full implications of ever improving AI over the next decade or two.

    In the end, when AI has stealthily replaced most governmental functions and officials start realizing that, more and more, they are just figureheads while AI makes decisions behind the scenes, there may be some faltering, ineffective pushback.

    It won't make any difference.

    • Re:

      Indeed!

      When thinking about this years ago, I came to the conclusion that creating an AI is akin to having a child. One can try to inculcate good behaviours in them, one can try to bestow on them a moral compass, one can try to instil in them a sense of fairness, of right and wrong, but at the end of the day they are going to grow up into an independent actor. This lack of control is unsettling, but inevitable.

      Of course, we're not quite at the stage of 'true' AI, yet, but that distinction is becoming more and

  • How would they even define "AI" so as to regulate it? It's almost impossible.

    I remember the previous AI hype bubble, about 30 years ago. Every company that could afford it, including every Fortune 500, had to get them some of that "AI" as fast as possible. It was going to magically solve all problems.

    Of course, while AI had its place and did provide some very useful solutions in some cases, the immense hype was way off. So when corporate disappointment came, and the hype bubble burst, there was a huge backlash.

    "AI Winter" came, and any systems, products, or technology that called itself "AI" was verboten, rejected, blacklisted, and not to be touched with a 50 foot pole. "We tried that AI stuff and it wasn't magic. We don't like AI anymore and if you even speak that word we will throw you out the window!"

    This even affected anything that could be related to AI, such as certain programming languages. I remember writing error handlers for user-facing interfaces to make sure nothing got through. To the point of faking up Java error messages, so that if the very worst happened and an error leaked, the user would think the program was written in Java.

    So tech companies with useful AI learned to call it anything else. You had to be careful even with words like "intelligent" or "expert". But if you could disguise your AI, call it something else, and learn the right marketing code speak, you could still sell your technology. But a lot of perfectly good companies and products died because there was no disguising them.

    "AI" is a nebulous term, and can mean just about anything. Lots of software these days has "AI" in it, even if it's not the NN variety. People certainly argue about using the term "AI" to describe ChatGPT.

    You can't outlaw or regulate math.
    And people can call their technology anything they want.

    If the government tries to regulate "AI", just call it something else. Any descriptive definition in any regulation is going to just amount to "software". And that won't fly. It's a pointless exercise.

    Besides, all the regulations are going to say anyway is some rather vague meaningless crap.

    • > How would they even define "AI" so as to regulate it?

      That's what the task-force is supposed to study. Good luck.

      > It's almost impossible.

      I suppose laws can be made against distributing unvetted content. Maybe this should apply to user-submitted material also if readership is high enough. A content hoster can't be expected to check every message, but they can check the most popular ones.

      Perhaps a distinction should be made between a website hoster and a content hoster. A content hoster would be like

  • As in, sue them for every penny they have (or might have) when there are negative outcomes.
  • "Artificial intelligence is no match for natural stupidity." -- Einstein
