Understanding AI's limits helps fight dangerous myths - The Washington Post

source link: https://www.washingtonpost.com/technology/2023/03/22/ai-red-flags-misinformation/

3 things everyone’s getting wrong about AI

As ChatGPT and other AI tools spread, people are struggling to separate fact from fiction

Updated March 30, 2023 at 7:27 a.m. EDT|Published March 22, 2023 at 7:28 a.m. EDT
(Illustration: a chatbot version of Frankenstein, by Elena Lacey/The Washington Post)

From chess engines to Google Translate, artificial intelligence has existed in some form since the mid-20th century. But these days, the technology is developing faster than most people can make sense of it. That leaves regular people vulnerable to misleading claims about what AI tools can do and who’s responsible for their impact.

With the arrival of ChatGPT, an advanced chatbot from developer OpenAI, people started interacting directly with large language models, a type of AI system most often used to power auto-reply in email, improve search results or moderate content on social media. Chatbots let people ask questions or prompt the system to write everything from poems to programs. As image-generation engines such as DALL-E also gain popularity, businesses are scrambling to add AI tools and teachers are fretting over how to detect AI-written assignments.
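To make that concrete, here is a minimal sketch of what “prompting a large language model” looks like in code. It assumes the openai Python package and an OPENAI_API_KEY set in the environment; the model name and the prompt are illustrative, not tied to any product described above.

```python
# Minimal sketch: sending a prompt to a hosted chat model.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "user", "content": "Write a four-line poem about chess engines."}
    ],
)

# The model returns generated text, not verified facts -- treat it accordingly.
print(response.choices[0].message.content)
```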

The flood of new information and conjecture around AI raises a variety of risks. Companies may overstate what their AI models can do and be used for. Proponents may push science-fiction storylines that draw attention away from more immediate threats. And the models themselves may regurgitate incorrect information. Basic knowledge of how the models work — as well as common myths about AI — will be necessary for navigating the era ahead.

“We have to get smarter about what this technology can and cannot do, because we live in adversarial times where information, unfortunately, is being weaponized,” said Claire Wardle, co-director of the Information Futures Lab at Brown University, which studies misinformation and its spread.

In an open letter published Tuesday and signed by Elon Musk, former Democratic presidential candidate Andrew Yang and “The Social Dilemma’s” Tristan Harris, more than 1,000 signatories called for a halt to further development of “giant AI experiments” such as the large language model GPT-4.

The letter cites risks to society and humanity posed by unrestrained development of impressive AI systems. It also refers to those systems as “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us.”

The letter sparked pushback, with some prominent AI researchers accusing the authors of misrepresenting AI’s capabilities and risks.

“AI literacy is starting to become a whole new realm of news literacy,” said Darragh Worland, who hosts the podcast “Is That a Fact?” from the News Literacy Project, which helps people navigate confusing and conflicting claims they encounter online.

There are plenty of ways to misrepresent AI, but some red flags pop up repeatedly. Here are some common traps to avoid, according to AI and information literacy experts.

Don’t project human qualities

It’s easy to project human qualities onto nonhumans. (I bought my cat a holiday stocking so he wouldn’t feel left out.)

That tendency, called anthropomorphism, causes problems in discussions about AI, said Margaret Mitchell, a machine learning researcher and chief ethics scientist at AI company Hugging Face, and it’s been going on for a while.

In 1966, an MIT computer scientist named Joseph Weizenbaum developed a chatbot named Eliza, which responded to users’ messages by following a script or rephrasing their questions. Weizenbaum found that people ascribed emotions and intent to Eliza even when they knew how the model worked.
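For a sense of how little machinery that illusion requires, here is a hypothetical ELIZA-style sketch in Python (not Weizenbaum’s original code). It does nothing but match keywords and rephrase the user’s own words, yet exchanges with programs built this way were enough for people to read emotion and intent into them.

```python
# A toy ELIZA-style responder: keyword rules that echo the user's words back.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)\?", "What do you think?"),
]

def respond(message: str) -> str:
    text = message.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # generic fallback, in the spirit of the original script

print(respond("I feel anxious about AI"))  # -> Why do you feel anxious about ai?
```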

[Podcast episode, 21 min: “Did the AI behind ChatGPT just get smarter?” GPT-4 is the latest AI tool from the company OpenAI. Despite being able to analyze images, it has quite a few limitations.]

As more chatbots simulate friends, therapists, lovers and assistants, debates about when a brain-like computer network becomes “conscious” will distract from pressing problems, Mitchell said. Companies could dodge responsibility for problematic AI by suggesting the system went rogue. People could develop unhealthy relationships with systems that mimic humans. Organizations could allow an AI system dangerous leeway to make mistakes if they view it as just another “member of the workforce,” said Yacine Jernite, machine learning and society lead at Hugging Face.

Humanizing AI systems also stokes our fears, and scared people are more likely to believe and spread wrong information, said Wardle of Brown University. Thanks to science-fiction authors, our brains are brimming with worst-case scenarios, she noted. Stories such as “Blade Runner” or “The Terminator” present a future where AI systems become conscious and turn on their human creators. Since many people are more familiar with sci-fi movies than with the nuances of machine-learning systems, we tend to let our imaginations fill in the blanks. By noticing anthropomorphism when it happens, Wardle said, we can guard against AI myths.

Don’t view AI as a monolith

AI isn’t one big thing — it’s a collection of different technologies developed by researchers, companies and online communities. Sweeping statements about AI tend to gloss over important questions, Jernite said. Which AI model are we talking about? Who built it? Who’s reaping the benefits and who’s paying the costs?

AI systems can do only what their creators allow, Jernite said, so it’s important to hold companies accountable for how their models function. For example, companies will have different rules, priorities and values that affect how their products operate in the real world. AI doesn’t guide missiles or create biased hiring processes. Companies do those things with the help of AI tools, Jernite and Mitchell said.

“Some companies have a stake in presenting [AI models] as these magical beings or magical systems that do things you can’t even explain,” Jernite said. “They lean into that to encourage less careful testing of this stuff.”

For people at home, that means raising an eyebrow when it’s unclear where a system’s information is coming from or how the system formulated its answer.

Meanwhile, efforts to regulate AI are underway. As of April 2022, about one-third of U.S. states had proposed or enacted at least one law to protect consumers from AI-related harm or overreach.

Be skeptical of AI tools

If a human strings together a coherent sentence, we’re usually not impressed. But if a chatbot does it, our confidence in the bot’s capabilities may skyrocket.

That’s called automation bias, and it often leads us to put too much trust in AI systems, Mitchell said. We may do something the system suggests even if it’s wrong or fail to do something because the system didn’t recommend it. For instance, a 1999 study found that doctors using an AI system to help diagnose patients would ignore their correct assessments in favor of the system’s wrong suggestions 6 percent of the time.

In short: Just because an AI model can do something doesn’t mean it can do it consistently and correctly.

As tempting as it is to rely on a single source, such as a search-engine bot that serves up digestible answers, these models don’t consistently cite their sources and have even made up fake studies. Use the same media literacy skills you would apply to a Wikipedia article or a Google search, said Worland of the News Literacy Project. If you query an AI search engine or chatbot, check the AI-generated answers against other reliable sources, such as newspapers, government or university websites, or academic journals.

