Source: https://bernoff.com/blog/ai-is-fickle-and-fallible-heres-why-thats-confounding-your-expectations

AI is fickle and fallible. Here’s why that’s confounding your expectations.

By Josh Bernoff, March 25, 2024

People have so much trouble with the flaws in AI tools because they are so different from everything else we do with our computing devices.

Consider any productivity application you use: a spreadsheet, a programming language, an app; it doesn’t matter which.

All of these applications have two qualities:

  1. Determinism. If you enter the same inputs, you get the same result. If you run a piece of software code twice, it gives the same result each time. If you calculate with a spreadsheet and input the same numbers, it gives the same answer. The only exceptions are when something changes in the environment — a weather app’s answers will vary based on new information about barometer readings, for example. But in general, the same input yields the same output.
  2. Infallibility. The only errors are human-generated. If your code fails to compile, you almost certainly made an error. If your spreadsheet says your net worth in 2030 will be $140,000,000, you probably misplaced a decimal point somewhere. We take it for granted: if there’s a mistake, somebody made it.
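The determinism property above can be sketched in a few lines of code. This is a hypothetical compound-growth calculation invented for illustration; the point is only that a pure function, like a spreadsheet formula, always returns the identical result for identical inputs.

```python
# A minimal sketch of determinism: same input, same output, every time.

def net_worth_in_2030(savings: float, annual_growth: float) -> float:
    # Hypothetical compound-growth formula (6 years of growth),
    # standing in for any spreadsheet calculation.
    return savings * (1 + annual_growth) ** 6

# Called twice with the same inputs, the results are always identical.
a = net_worth_in_2030(100_000, 0.05)
b = net_worth_in_2030(100_000, 0.05)
assert a == b  # deterministic: no surprises
```

If this assertion ever failed, you'd assume a hardware fault or a bug you wrote, not a quirk of the tool. That is exactly the expectation chatbots break.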

AI chatbots violate both of those rules.

You can ask a chatbot the same question twice and get two different answers. They’ll be similar, but they may not be identical. Chatbots are fickle.
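Why does asking twice give two answers? Chatbots generate text by sampling from a probability distribution over candidate next words, rather than computing one fixed result. The toy sketch below is not a real chatbot, and the candidate answers and weights are invented for illustration, but it shows the mechanism: the same prompt can yield different, similar-but-not-identical replies.

```python
import random

# Toy sketch of sampling-based generation. The candidate answers and
# their probabilities are hypothetical, standing in for a model's
# distribution over possible responses.
CANDIDATES = ["Paris.", "Paris, France.", "The capital is Paris."]
WEIGHTS = [0.5, 0.3, 0.2]  # invented probabilities for illustration

def answer(prompt: str) -> str:
    # Same prompt in, but the sampled choice can differ between calls.
    return random.choices(CANDIDATES, weights=WEIGHTS, k=1)[0]

# Ask the same question many times: the replies agree in substance
# (all say Paris) but vary in wording.
replies = {answer("What is the capital of France?") for _ in range(50)}
```

Real systems expose knobs for this (a "temperature" setting controls how spread out the sampling is), but the fickleness is built into the generation process itself.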

A chatbot can generate false information, too — so-called “hallucinations.” Like when it says that a peregrine falcon is the fastest marine mammal, or that I went to Harvard.

Unlike other software, there is rarely an audit trail for these problems. It’s opaque. You likely can’t explain why an AI made a mistake or gave a different answer, and you need to be an expert to even try to “fix” it.

This is completely at odds with every other productivity tool we use.

Chatbots’ limitations are more like people’s limitations

Imagine that you had a colleague. The colleague was extremely productive and helpful, and had a phenomenal memory.

However, that colleague had flaws. First, if you asked them a question twice, they might give two different answers. Those answers would be substantially the same, but not identical.

And second, sometimes that colleague could be mistaken.

Would you fire them? What a waste. No, you’d probably keep working with them, but check their work before using it. And you’d try to teach them to get better.

ChatGPT and other AI-based chatbots are like that colleague. And they cost a whole lot less than a human assistant.

Adjust your expectations

If you have mentally written off AI, is it because you expect it to be deterministic and infallible, like all the other software tools you use?

Change your expectations.

If you insist on treating it like every other piece of software or app that you use, you’ll be constantly disappointed. But if you treat it as a flawed and not completely trustworthy but extremely helpful assistant, you’ll be on the path to getting value out of it.
