
The many types of AI hallucinations (Clue - it's not that simple!)

Source: https://diginomica.com/many-types-ai-hallucinations-clue-its-not-simple


Enterprises and governments have struggled with data quality and accuracy since before the dawn of computers. But concerns about AI hallucinations have picked up steam with the rapid adoption of Large Language Models (LLMs), like ChatGPT, that can make up believable bullshit on an impressive scale. Not only that, but they deliver it with a computer-generated straight face.

Jon Reed recently reported on tech leaders demanding zero tolerance for hallucinations. The problem is that there are many types of hallucination we need to address.

A public demo of Google’s new Bard chatbot proudly explained that the James Webb Space Telescope had taken the first pictures of a planet outside our solar system. Google’s market value took a roughly $100 billion hit once astronomers corrected the record.

Other AI hallucinations are emerging that could threaten lives and careers. OpenAI’s ChatGPT falsely reported that Brian Hood, mayor of Hepburn Shire in Victoria, Australia, had been imprisoned for his part in a foreign bribery scandal in the early 2000s. In fact, he was the whistleblower who notified authorities of the crime and was never charged himself. Hood demanded OpenAI fix the issue or face the world’s first AI defamation lawsuit.

The rapid proliferation of hallucinating AIs could also amplify the scale of tragic incidents, such as the UK Post Office scandal that started in 1999. Hundreds of local postmasters were wrongly accused of theft, and many were even convicted, after a botched IT update. The inquiry is still ongoing more than two decades later, and many of the accused are still trying to get their lives back.

Hallucinating references

One suggested fix is to program generative AI to directly cite relevant sources for their information. But there are many ways that this, too, can go awry. 
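To make the mechanics concrete, here is a minimal, hypothetical Python sketch of how "cite your sources" grounding is often wired up: the model only sees a numbered list of retrieved passages and is asked to cite them as [n], and a post-check flags citations that point at nothing. The function names and prompt wording are illustrative assumptions, not any vendor’s actual API.

```python
import re

def build_cited_prompt(question, passages):
    """Number the retrieved passages and ask the model to cite them as [n]."""
    numbered = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, start=1))
    return (
        "Answer using ONLY the numbered passages below, citing each claim as [n].\n"
        + numbered
        + f"\n\nQuestion: {question}\nAnswer:"
    )

def uncited_sources(answer, passages):
    """Return citation numbers in the answer that match no retrieved passage."""
    cited = {int(n) for n in re.findall(r"\[(\d+)\]", answer)}
    return sorted(n for n in cited if not 1 <= n <= len(passages))

# Hypothetical model output that cites a source [3] which was never retrieved --
# the same failure mode as citing a newspaper article that does not exist.
passages = ["Passage about topic A.", "Passage about topic B."]
answer = "Topic A is well documented [1], and a 2018 report confirms it [3]."
print(uncited_sources(answer, passages))  # -> [3]
```

Even with a check like this, a model can still cite a real passage that does not actually support the claim it is attached to, which is why grounding alone does not eliminate hallucinated references.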

In the US, law professor Jonathan Turley was shocked to discover that ChatGPT had falsely accused him of making inappropriate comments to students on a class trip, even though he had never taken a trip with students in 35 years of teaching. The chatbot even cited a 2018 Washington Post article that did not exist. Turley wrote:

So, the question is why would an AI system make up a quote, cite a nonexistent article and reference a false claim? The answer could be because AI and AI algorithms are no less biased and flawed than the people who program them. Recent research has shown ChatGPT’s political bias, and while this incident might not be a reflection of such biases, it does show how AI systems can generate their own forms of disinformation with less direct accountability.

Despite such problems, some high-profile leaders have pushed for its expanded use. The most chilling involved Microsoft founder and billionaire Bill Gates, who called for the use of artificial intelligence to combat not just “digital misinformation” but “political polarization.”

Hallucinating truth

In response to concerns about generative AI, hundreds of luminaries have called for an AI pause on developing large language models bigger than GPT-4. There is some wisdom in laying out a framework for training, creating, and providing these things before scaling them up. 

But critics point out that many signatories are just buying time for their own grandiose AI projects. Shortly after signing the AI-pause letter, Elon Musk explained his plans for TruthGPT to Fox News as “a maximum truth-seeking AI that tries to understand the nature of the universe.” 

This contrasts with a new California law banning Tesla and other automakers from deceptively naming, referring to, or marketing a car as self-driving. Videos of Musk extolling the capabilities of full self-driving features were introduced into a lawsuit involving a fatality. Musk’s lawyers argued these were deep fakes generated by AI purporting to show him saying and doing things he never actually did.

Hallucinating the end of the world

Many AI pioneers suggest that the biggest concern is the existential threat larger AI models pose to the human race, rather than more pressing issues like the future of copyright, work, data rights, and transparent algorithmic decision-making. Writing for the New Statesman, Will Dunn observed:

The best thing a company can do to build shareholder value today is to create a terrifying and unpredictable new threat to humanity – or to say that’s what they’ve done.

In 2016, Facebook’s stock price surged 4.5% in a day after CEO Mark Zuckerberg apologized for subverting democracy. This year, Microsoft stock similarly surged 1.5% the day before Sam Altman, CEO of its AI partner OpenAI, testified before the US Senate that AI could “cause significant harm to the world.”

Hallucinating about saving the world

Naomi Klein, professor of Climate Justice at the University of British Columbia, argues that the real hallucination in all of this is the personification of AI intelligences. This feeds the mythology that AI is human and is here to help us. She wrote:

By appropriating a word [hallucination] commonly used in psychology, psychedelics and various forms of mysticism, AI’s boosters, while acknowledging the fallibility of their machines, are simultaneously feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species. How else could bots like Bing and Bard be tripping out there in the ether?

Proponents argue that generative AI may end poverty, cure disease, and make governments more responsive. But Klein counters:

These, I fear, are the real AI hallucinations and we have all been hearing them on a loop ever since Chat GPT launched at the end of last year.

She believes these benevolent predictions are just cover stories for what may turn out to be the largest and most consequential theft in human history. The real issue is about the potential for these tools to scrape the total sum of human knowledge and wall it off in proprietary products. 

Hallucinating intelligence

Maybe the underlying problem is the way we characterize these tools. Calling them “artificial intelligence” obscures the fact that they are trained on the conversations of humans at scale. VR pioneer Jaron Lanier argues:

My difference with my colleagues is that I think the way we characterize the technology can have an influence on our options and abilities to handle it. And I think treating AI as this new, alien intelligence reduces our choices and has a way of paralyzing us. 

An alternate take is to see it as a new form of social collaboration, where it’s just made of us. It’s a giant mash-up of human expression, opens up channels for addressing issues and makes us more sane and makes us more competent. So, I make a pragmatic argument to not think of the new technologies as alien intelligences, but instead as human social collaborations.

My take

For the past week, I have been hallucinating about a large box outside my flat that was being returned to an online superstore. The problem is that the delivery company reported that they had taken possession of this box when I could plainly see it sitting outside my flat for a week. The retailer was only too happy to issue a refund, but the box remained. My wife would even agree with me that it was there, although the cleaner pretended not to notice it, so we did not get a nasty note from the management about large things in the hallway.

I am not sure how this hallucination occurred in the first place. On further investigation, I learned that the box exceeded the shipper’s size limits. Perhaps the delivery driver saw this and ticked the wrong box. I will never know.

What I do know is that every time I called the shipper, a helpful AI bot declared it was in transit at their facility and cut me off. I tried calling to talk to a person, and the new automated system also cut me off when it decided my package was in transit. Then every day, the bot sent me ‘helpful’ emails to tell me they had my package somewhere in their facility.

Eventually, after a long interchange, the online superstore sent another firm specializing in large shipments to collect the package. The agent tried to tell me that they had the package and I had my refund, so I could rest assured. The tone only changed when I insisted that someone had obviously lied.

Businesses and governments will continue to automate more processes using these large language models that either hallucinate themselves or believe the hallucinations of others. It is up to us to build better backchannels and processes for actually speaking with humans when things go awry. 

There was one line that struck me at the end of the Jaron Lanier interview as a kind of salve for these new hallucinations: 

Faith is not fundamentally rational, but there is a pragmatic argument, as I keep on repeating, to placing your faith in other people instead of machines. If you care about people at all, if you want people to survive, you have to place your faith in the sentience of them instead of in machines as a pragmatic matter, not as a matter of absolute truth.

