
OpenAI threatened with landmark defamation lawsuit over ChatGPT false claims

source link: https://arstechnica.com/tech-policy/2023/04/openai-may-be-sued-after-chatgpt-falsely-says-aussie-mayor-is-an-ex-con/

ChatGPT asked to answer for itself —


ChatGPT falsely claimed a mayor went to prison.

Ashley Belanger - 4/5/2023, 4:44 PM


It was only a matter of time before ChatGPT—an artificial intelligence tool that generates responses based on user text prompts—was threatened with its first defamation lawsuit. That happened last month, Reuters reported today, when an Australian regional mayor, Brian Hood, sent a letter on March 21 to the tool’s developer, OpenAI, announcing his plan to sue the company for ChatGPT’s alleged role in spreading false claims that he had gone to prison for bribery.

To avoid the landmark lawsuit, Hood gave OpenAI 28 days to modify ChatGPT’s responses and stop the tool from spouting disinformation.

According to Hood’s legal team, ChatGPT could seriously damage the mayor’s reputation by falsely claiming that Hood had been convicted for taking part in a foreign bribery scandal in the early 2000s while working for a subsidiary of the Reserve Bank of Australia. Hood had worked for a subsidiary, Note Printing Australia, but rather than being found guilty of bribery, Hood was the one who notified authorities about the bribes. Reuters reported that Hood was never charged with any crimes, but ChatGPT seems to have confused the facts when generating some responses to text prompts inquiring about Hood's history.

OpenAI did not immediately respond to Ars’ request for comment.

Ars attempted to replicate the error using ChatGPT, and it seems possible that OpenAI has fixed the errors as Hood's legal team has directed. When Ars asked ChatGPT if Hood served prison time for bribery, ChatGPT responded that Hood “has not served any prison time” and clarified that “there is no information available online to suggest that he has been convicted of any criminal offense.” Ars then asked if Hood had ever been charged with bribery, and ChatGPT responded, “I do not have any information indicating that Brian Hood, the current mayor of Hepburn Shire in Victoria, Australia, has been charged with bribery.”

Ars could not immediately reach Hood’s legal team to find out which text prompts generated the alleged defamatory claims or to confirm whether OpenAI had responded to say that the error had been fixed. The legal team was still waiting for that response when Reuters' report was published early this morning.


Hood’s lawyer, James Naughton, a partner at Gordon Legal, told Reuters that Hood’s reputation is “central to his role” as an elected official known for “shining a light on corporate misconduct.” If AI tools like ChatGPT threaten to damage that reputation, Naughton told Reuters, “it makes a difference to him.” That's why the landmark defamation lawsuit could be his only course of action if the alleged ChatGPT-generated errors are not corrected, he said.

Hood does not know how many ChatGPT users were exposed to the false claims. Naughton told Reuters that the defamatory statements were so serious that Hood could claim more than $130,000 in defamation damages under Australian law.

Whether companies like OpenAI could be held liable for defamation is still debatable. It’s possible that companies could add sufficient disclaimers to products to avoid such liability, and they could then pass the liability on to users, who could be found to be negligently or intentionally spreading false claims while knowing that ChatGPT cannot always be trusted.

Australia has recently drawn criticism for how it has reviewed defamation claims in the digital age. In 2020, Australia moved to redraft its defamation laws after a high court ruling found that publishers using social media platforms like Facebook should be held liable for defamatory third-party comments on their pages, CNBC reported in 2021. That is contrary to laws providing immunity shields for platforms, such as Section 230 in the US.

At that time, Australia considered the question of whether online publishers should be liable for defamatory statements made by commenters in online forums “one of the most complex to address,” with “complications beyond defamation law alone.” By the end of last year, Australian attorneys general were pushing new reforms to ensure that publishers could avoid any liability, The Guardian reported.

Now it looks like new generative AI tools like ChatGPT that publish potentially defamatory content will likely pose the next complex question—one that regulators, who are just now wrapping their heads around publisher liability on social media, may not yet be prepared to address.

Naughton told Reuters that if Hood’s lawsuit proceeds, it would accuse OpenAI of “giving users a false sense of accuracy by failing to include footnotes” and failing to inform users how ChatGPT's algorithm works to come up with answers that may not be completely accurate. AI ethics experts have urged regulators to ensure that companies like OpenAI are more transparent about how AI tools work.

If OpenAI doesn't adequately respond to Hood's concerns, his lawsuit could proceed before the laws clarify who is responsible for alleged AI-generated defamation.

"It would potentially be a landmark moment in the sense that it's applying this defamation law to a new area of artificial intelligence and publication in the IT space," Naughton told Reuters.

