
The AI black box problem



One of the biggest hurdles that AI faces today is public trust and acceptance. People (often understandably) struggle to trust the decisions and answers that AI-powered tools provide.  

The AI black box problem compounds this hurdle. AI doesn’t show its workings. It doesn’t explicitly share how and why it reaches its conclusions. All we know is that some omniscient algorithm has spoken.

And until AI programmers can remove these layers of obfuscation, there will always be an air of discomfort around trusting the technology. Here, we explain the AI black box, what causes it, and why it’s a concern.  


The AI black box 

In computing, a ‘black box’ is a device, system or program that allows you to see the input and output, but gives no view of the processes and workings between. The AI black box, then, refers to the fact that with most AI-based tools, we don’t know how they do what they do.  

In other words, we know the question or data the AI tool starts with (the input). For example, photos of birds. We also know the answer it produces (the output). For example, labelling the pictures of birds as ‘birds’. 

But thanks to the AI black box problem, we have no idea how the tool turned the input into the output. Which is fine, until it produces an unexpected, incorrect, or problematic answer. 


Why the AI black box exists 

So, what causes the AI black box problem? The most common tools to suffer from the black box problem are those that use artificial neural networks and/or deep learning.  

Artificial neural networks consist of hidden layers of nodes. Each node processes the input it receives and passes its output to the next layer of nodes. Deep learning uses large artificial neural networks with many of these hidden layers, and the network ‘learns’ on its own by recognising patterns in data.

And this can get immensely complicated. We can’t see what the nodes have ‘learned’. We don’t see the output between layers, only the final conclusion. So we can’t know how the nodes are analysing the data, and that is the AI black box.
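To make the idea concrete, here is a minimal sketch of a tiny feed-forward network in plain Python/NumPy. The weights are random stand-ins for whatever a real network would have ‘learned’. The input and the final output are easy to inspect, but the hidden-layer activations are just arrays of numbers with no obvious human meaning, and that is exactly where the black box lives.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Illustrative only: random weights stand in for whatever the network 'learned'.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input (4 features) -> hidden layer 1 (8 nodes)
W2 = rng.normal(size=(8, 8))   # hidden layer 1 -> hidden layer 2
W3 = rng.normal(size=(8, 2))   # hidden layer 2 -> output (2 class scores)

x = np.array([0.2, 0.7, 0.1, 0.9])   # the input: something we can see

h1 = relu(x @ W1)     # hidden activations: visible as numbers, but not meaningful to a human
h2 = relu(h1 @ W2)    # another layer of opaque intermediate values
logits = h2 @ W3      # the output: something we can see

print("input:", x)
print("output (class scores):", logits)
print("hidden layer 1 activations:", h1)   # numbers, not explanations
```

Even in this toy example, nothing in `h1` or `h2` tells you *why* one class score ends up higher than the other; a real deep network has millions of such values.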


The mystery of our minds 

Beyond the mysterious nature of these tools, there’s also the mysterious nature of our brains — which is what AI tools ultimately try to replicate. We don’t know exactly how our brains work — there are too many neurons and synapses to get a full picture of what’s happening.  

It’s not crystal clear how, at a base level, we come to our decisions. How, then, can we expect to understand how AI-powered tools do it?

We might be able to trust humans to make these decisions by looking at the reliability of their results. But this isn’t a courtesy extended to AI tools. 


The AI black box problem 

So, we know what the AI black box is and what causes it. But why is it a problem?  

As AI functionality spreads into more of our tools, the impact of its decisions becomes more serious. AI functions are informing police, doctors and banks. They play a role in deciding whether you’ll get that loan, or whether you need a particular treatment. You could even find the police on your doorstep for questioning after a facial recognition AI identifies you as a criminal.

With such an impact, there are ethical concerns that arise from ignoring the AI black box problem. Because just like humans, AI can make mistakes. AI technology doesn’t come with a moral code. It doesn’t ‘understand’ the output it provides the way a human does. When an AI produces a biased result, it won’t notice. So humans must notice instead, and that’s difficult to do when we can’t understand the reasoning behind the result.

How, then, can we trust that an AI decision is the best one? Without this trust, it’s difficult to accept AI. People won’t be comfortable with its use until its inner workings are explainable.


Solving the AI black box problem 

With the AI black box problem becoming an increasing concern, AI developers are now turning their attention to solving it.   

The answer lies with explainable AI. As the name suggests, explainable AI refers to AI tools that produce results a human can understand and explain.
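The article doesn’t prescribe any particular method, but one common family of techniques asks how much a model’s predictions depend on each input feature. As a hedged example, the sketch below uses scikit-learn’s permutation importance, which measures how much performance drops when each feature is shuffled, to give a rough, human-readable account of what a black-box model is relying on.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Example only: permutation importance is just one way to make a black-box
# model's behaviour more explainable; it is not the only (or a complete) answer.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")   # higher score = the model leans on this feature more
```

Output like this doesn’t reveal the model’s full reasoning, but it gives a human something concrete to check, question, and challenge.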

Until such functionality becomes available, though, the black box problem provides a reason to remain cautious of AI. AI-powered decisions should act as suggestions for human decisions, not ultimate answers.  

