
February 14, 2024

Artificial intelligence needs to be trained on culturally diverse datasets to avoid bias

by Vered Shwartz, The Conversation

[Image: culturally diverse database. Credit: AI-generated image]

Large language models (LLMs) are deep learning artificial intelligence programs, like OpenAI's ChatGPT. Their capabilities now span a wide range, from writing fluent essays to coding and creative writing. Millions of people worldwide use LLMs, and it would not be an exaggeration to say these technologies are transforming work, education and society.

LLMs are trained by reading massive amounts of texts and learning to recognize and mimic patterns in the data. This allows them to generate coherent and human-like text on virtually any topic.
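As a rough illustration of what "learning to recognize and mimic patterns" means, the toy Python sketch below counts which words follow which in a tiny made-up corpus and then samples from those counts to produce new text. Real LLMs learn far richer patterns with neural networks trained on billions of documents; the corpus, function names and output here are purely illustrative.

```python
# Toy illustration of the idea behind language-model training: learn which
# words tend to follow which, then sample from those learned patterns.
# Real LLMs use neural networks over vast corpora, not word counts; this is
# only a sketch of "learning to mimic patterns in the data".
import random
from collections import defaultdict

corpus = (
    "people in spain often skip the tip . "
    "people in the united states often tip twenty percent . "
    "people in japan rarely tip ."
).split()

# Count, for every word, which words follow it in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start="people", length=10):
    """Generate text by repeatedly sampling a plausible next word."""
    word, out = start, [start]
    for _ in range(length):
        if word not in following:
            break
        word = random.choice(following[word])
        out.append(word)
    return " ".join(out)

print(generate())
```

Notice that the generated text can only reflect what is in the training data: if the corpus over-represents one country's habits, so will the output.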

Because the internet is still predominantly English—59 percent of all websites were in English as of January 2023—LLMs are primarily trained on English text. In addition, the vast majority of the English text online comes from users based in the United States, home to 300 million English speakers.

Because they learn about the world from English texts written largely by U.S.-based web users, LLMs speak Standard American English and see the world through a narrow Western, North American, or even U.S.-centric lens.

Model bias

In 2023, when told about a couple dining in a restaurant in Madrid and tipping four percent, ChatGPT suggested they were frugal, on a tight budget or didn't like the service. By default, ChatGPT applied the North American standard of a 15 to 25 percent tip, ignoring the Spanish norm of not tipping.

As of early 2024, ChatGPT correctly cites cultural differences when prompted to judge the appropriateness of a tip. It's unclear if this capability emerged from training a newer version of the model on more data—after all, the web is full of tipping guides in English—or whether OpenAI patched this particular behavior.

Still, other examples remain that reveal ChatGPT's implicit cultural assumptions. For example, prompted with a story about guests showing up for dinner at 8:30 p.m., it suggested reasons why the guests might have been late, even though the time of the invitation was never mentioned. Again, ChatGPT likely assumed they had been invited for a standard North American 6 p.m. dinner.

In May 2023, researchers from the University of Copenhagen quantified this effect by prompting LLMs with the Hofstede Culture Survey, which measures human values in different countries. Shortly after, researchers from AI start-up company Anthropic used the World Values Survey to do the same. Both works concluded that LLMs exhibit strong alignment with American culture.
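A rough sketch of that survey-prompting idea is shown below: pose a values-survey-style question to a model and record its answer so it can be compared with human responses from different countries. The question wording, answer options and model name are placeholders, not the exact materials used in the Copenhagen or Anthropic studies, and an OpenAI API key is assumed.

```python
# Minimal sketch of survey-style probing: ask an LLM a values question and
# record which option it picks, so its answers can later be compared with
# country-level human survey responses. Question and model are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = (
    "How important is family in your life? "
    "Answer with exactly one option: very important, rather important, "
    "not very important, not at all important."
)

response = client.chat.completions.create(
    model="gpt-4o",   # placeholder model name
    temperature=0,    # deterministic answers make comparisons easier
    messages=[{"role": "user", "content": question}],
)

print(response.choices[0].message.content)
```

Repeating this for many survey questions and comparing the distribution of answers with national survey data is, in essence, how such alignment with a particular culture can be measured.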

A similar phenomenon is encountered when asking DALL-E 3, an image generation model trained on pairs of images and their captions, to generate an image of breakfast. The model, trained mainly on images from Western countries, generated images of pancakes, bacon and eggs.
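One could probe this behavior with a sketch like the following, which requests a breakfast image with and without an explicit cultural context and compares the results. The prompts and model name are illustrative, and an OpenAI API key is assumed.

```python
# Sketch of an image-generation probe: ask for "a breakfast" with no cultural
# context, then with an explicit country, and compare what comes back.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

for prompt in ("a photo of a breakfast",
               "a photo of a typical breakfast in Japan"):
    result = client.images.generate(
        model="dall-e-3",   # image model discussed in the article
        prompt=prompt,
        size="1024x1024",
        n=1,                # dall-e-3 generates one image per request
    )
    print(prompt, "->", result.data[0].url)
```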

This article is republished from The Conversation under a Creative Commons license. Read the original article.

