
Don’t build trust with AI, calibrate it

 1 year ago
source link: https://uxdesign.cc/dont-build-trust-with-ai-calibrate-it-2889a5740e16


Designing AI systems with the right level of trust


AI operates on probabilities and uncertainties. Whether it’s object recognition, price prediction, or a Large Language Model (LLM), AI can make mistakes.

The right amount of trust is a key ingredient in a successful AI-empowered system. Users should trust the AI enough to extract value from the tech, but not so much that they’re blinded to its potential errors.

When designing for AI, we should carefully calibrate trust instead of making users rely on the system blindly.

Diagram visualising calibrated trust in the middle of the weights: not too much and not too little
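One common way to support this kind of calibration is to surface the model’s confidence alongside its prediction, in plain language rather than as a raw probability. The sketch below is a hypothetical illustration (the function names and thresholds are assumptions, not from any particular product):

```python
# A minimal sketch of trust calibration in the UI: instead of presenting a
# bare prediction, attach a plain-language confidence hint so users know
# how much to trust each individual answer. Thresholds are illustrative.

def trust_label(confidence: float) -> str:
    """Map a model confidence score in [0, 1] to a user-facing hint."""
    if confidence >= 0.9:
        return "high confidence"
    if confidence >= 0.6:
        return "medium confidence - worth a quick check"
    return "low confidence - please verify before acting"

def present(prediction: str, confidence: float) -> str:
    """Combine the prediction with its confidence hint for display."""
    return f"{prediction} ({trust_label(confidence)})"

print(present("Estimated price: $420,000", 0.55))
```

The point of the bucketing is deliberate: a raw "0.55" invites over-reading, while a labelled hint nudges the user toward the right level of scrutiny.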

How people trust AI systems

According to a meta-analysis by Harvard researchers, people often place too much trust in AI systems when presented with final predictions. Merely providing explanations for those predictions doesn’t solve the problem: explanations act as a signal of the AI’s competence rather than drawing attention to its mistakes.

Sure, a few wrong turns won’t ruin your journey if you’re just seeking movie or music recommendations. But in critical decision-making situations, this approach can make users “co-dependent on AI and susceptible to AI mistakes.”

The trust issue becomes even more critical with LLM chatbots. These interactions naturally create more trust and even emotional connection.

Evaluate risk

Everyone involved in creating an AI system, at any step, is responsible for its impact. The first thing AI designers must do is understand the risks of their solution.

How critical is it if AI makes a mistake, and how likely is it to happen?

Graph showing that high-stake risks that are likely to happen are dangerous and should be mitigated before an AI solution can be used.
Assess and mitigate the risks of an AI implementation
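The likelihood × severity assessment from the graph above can be sketched as code. Everything here (the failure modes, the 1–5 scales, the mitigation threshold) is a hypothetical illustration of the idea, not a standard scoring scheme:

```python
# A hypothetical risk-matrix sketch: score each failure mode by how likely
# it is and how severe the consequences are, then flag the high-stakes,
# likely ones that must be mitigated before the AI solution ships.

from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    likelihood: int  # 1 (rare) .. 5 (frequent)
    severity: int    # 1 (minor) .. 5 (critical)

    @property
    def risk(self) -> int:
        # Simple likelihood x severity product, as in the matrix above.
        return self.likelihood * self.severity

def must_mitigate(fm: FailureMode, threshold: int = 12) -> bool:
    """Treat high-risk failure modes as launch blockers; the threshold
    value is an assumption for illustration."""
    return fm.risk >= threshold

modes = [
    FailureMode("wrong movie recommendation", likelihood=4, severity=1),
    FailureMode("biased recidivism score", likelihood=3, severity=5),
]
for fm in modes:
    status = "MITIGATE FIRST" if must_mitigate(fm) else "monitor"
    print(fm.name, fm.risk, status)
```

A frequent but harmless mistake (a bad movie pick) lands in the "monitor" zone, while a less frequent but critical one (a biased recidivism score) is flagged as a blocker, which mirrors the point of the matrix.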

If the available training data is biased, messy, subjective, or discriminatory, it will almost certainly lead to harmful results. In that case, the team’s first priority should be to prevent this.

For example, a recidivism risk assessment algorithm used in some US states is biased against black people. It predicts a score of the likelihood of committing a future crime and often rates black…

