
Fiddler Labs opens AI 'black box' to solve bias problem and enable compliance

source link: https://siliconangle.com/2021/06/16/fiddler-labs-opens-ai-black-box-solve-bias-problem-enable-compliance-awsshowcase2q21/
[Photo: Krishna Gade and Amit Paka of Fiddler Labs at the AWS Startup Showcase 2021]

Bias in artificial intelligence has drawn growing attention as large companies face accusations of unfair decisions produced by their sophisticated algorithms. Twitter, Facebook, Apple and Goldman Sachs are among those that have found themselves caught up in such controversies.

The problem with AI is that the models look like “black boxes.” Unlike ordinary software code, you cannot simply open them up, read their inner workings and understand what they are doing, according to Krishna Gade (pictured, left), founder and chief executive officer of “explainable AI” company Fiddler Labs Inc.

This leaves room for problems, Gade added. “You need a way to innovate to actually explain it, you need to understand it and you need to monitor it,” he said. “And this is where a model performance management system like Fiddler can help you look into that black box.”

Gade and Amit Paka (right), founder and chief product officer of Fiddler, spoke with John Furrier, host of theCUBE, SiliconANGLE Media’s livestreaming studio, during the AWS Startup Showcase: The Next Big Things in AI, Security & Life Sciences. They discussed the issues of bias and unfairness surrounding machine learning and AI models, why explaining and monitoring these models are key, and Fiddler Labs’ solutions for these challenges. (* Disclosure below.)

Platform for model explainability

There is not just one cause for bias in AI, which is why it is so hard to get rid of. One of the major problems is insufficient training data, where some demographic groups are absent or underrepresented. Another is that everyone carries conscious or unconscious biases, which find their way into the data and end up captured by machine learning models.

Regardless of the reason for the bias, it is necessary to open the AI black box and probe the model at different granularities to really understand how it is behaving, according to Paka. This is the goal of Fiddler’s platform for model explainability, modern monitoring and bias detection.

“For example, why is my model making high-risk predictions for loans made in California, or loans made to all men versus loans made to all women?” he said. “And it could also be at the global level: What are the key data factors important to my model?”
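
Those are examples of local explanations (why one prediction, or one slice of predictions, came out the way it did) and global explanations (which features drive the model overall). As a rough illustration only, and not Fiddler’s actual API, the sketch below computes both kinds of attributions for a hypothetical loan-approval model using the open-source shap and xgboost packages; the features and data are invented.

```python
# Minimal sketch of local vs. global explanations for a hypothetical
# loan-approval model. Uses open-source shap + xgboost, not Fiddler's API.
import pandas as pd
import xgboost as xgb
import shap

# Invented tabular loan data: three features and an approve/deny label.
X = pd.DataFrame({
    "income":       [40_000, 85_000, 120_000, 30_000, 70_000, 55_000],
    "credit_score": [620, 710, 780, 590, 690, 640],
    "loan_amount":  [10_000, 25_000, 40_000, 8_000, 20_000, 15_000],
})
y = [0, 1, 1, 0, 1, 0]

model = xgb.XGBClassifier(n_estimators=20, max_depth=2).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one attribution per feature per row

# Local explanation: why did the model score applicant 0 the way it did?
print("Applicant 0 attributions:", dict(zip(X.columns, shap_values[0])))

# Global explanation: which features matter most across all applicants?
global_importance = abs(shap_values).mean(axis=0)
print("Global importance:", dict(zip(X.columns, global_importance)))
```

In the same spirit, attributions can be aggregated over a slice of the data, say all applicants in one state or of one gender, to check whether a protected group is being scored differently.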

To do this, the model performance management system sits at the heart of the machine learning workflow and keeps track of every part of it: the data that flows through the ML life cycle, the models being created and deployed, and how those models are performing.

The idea is to give businesses a centralized way to manage all of this information in one place. It serves as a kind of oversight tool, both from a compliance standpoint and from an operational standpoint, showing what is happening with the models in production.
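
A very simplified, hypothetical picture of what “one place” might hold for each model is sketched below; the field names are invented for illustration, and a real system tracks far more.

```python
# Toy sketch of the per-model record a centralized oversight tool might keep.
# Field names are invented; a production system tracks much more than this.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ModelRecord:
    name: str                      # e.g. "credit_risk_v3"
    version: str
    owner_team: str
    deployed_at: datetime
    training_data_ref: str         # pointer to the dataset used for training
    latest_accuracy: float         # most recent evaluation metric
    drift_alerts: list[str] = field(default_factory=list)
    compliance_reports: list[str] = field(default_factory=list)

registry: dict[str, ModelRecord] = {}  # the single place teams look at

registry["credit_risk_v3"] = ModelRecord(
    name="credit_risk_v3",
    version="3.1.0",
    owner_team="risk-ml",
    deployed_at=datetime(2021, 5, 1),
    training_data_ref="s3://example-bank/datasets/credit/2021-04",  # hypothetical path
    latest_accuracy=0.87,
)
```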

“Imagine you’re a bank; you’re probably creating hundreds of these models for a variety of use cases, credit risk, fraud, anti-money-laundering,” Gade said. “How are you going to know which models are actually working very well? Which models are stale? Which models are expired? How do you know which models are underperforming? Are you getting alerts?”
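
As one concrete, hypothetical example of the kind of check such a system runs continuously, the sketch below computes the population stability index (PSI) for a single feature, comparing the training baseline against live production traffic and raising an alert when drift crosses a rule-of-thumb threshold; the threshold and data are illustrative, not Fiddler’s actual defaults.

```python
# Sketch of one drift-monitoring check: population stability index (PSI)
# comparing a feature's training baseline to live production traffic.
import numpy as np

def psi(baseline: np.ndarray, production: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two samples of one feature."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Floor each bin's share to avoid log(0) for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_income = rng.normal(60_000, 15_000, 10_000)    # training distribution
production_income = rng.normal(52_000, 15_000, 10_000)  # live traffic has shifted

score = psi(baseline_income, production_income)
if score > 0.2:  # a common rule-of-thumb cutoff for significant drift
    print(f"ALERT: 'income' feature has drifted, PSI = {score:.3f}")
```

A monitoring service would run checks like this per feature and per model on a schedule, alongside accuracy and outcome metrics, and route the resulting alerts to the teams that own each model.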

Fiddler provides these performance management and governance services in a visual, multidashboard interface that can be monitored by various teams across the enterprise, such as development and operations.

Monitoring regulatory requirements continuously

Fiddler’s solution also enables companies to maintain compliance while innovating. A bank building a new credit risk model, for example, needs to validate the model even before implementing it. To do this, the bank must explain the model and submit a report to the internal risk management team, which will review it and then potentially share it with the audit team and maintain a record for regulatory purposes.

“Fiddler helps them create these reports, keep all of these reports in one place, and then once the model is deployed, it basically can help them monitor these models continuously,” Gade explained.

This compliance ingredient has gained even more importance recently with a proposal for AI regulation in Europe. Following its 2020 AI White Paper, the European Commission launched in April this year a package of proposed rules and actions, with a focus on trust and transparency, which aims to turn Europe into the global hub for trustworthy artificial intelligence.

“[The proposed regulation] classifies risk within applications. And specifically for high-risk applications, they proposed new oversight; and that’s mandating explainability, helping teams understand how the models are working and monitoring to ensure that when a model is trained for high accuracy, it maintains that,” Paka said. “Those two mandatory needs of high-risk application, those are the ones that are solved by Fiddler.”

One differentiator of Fiddler’s platform is that it can be useful across a variety of AI problems, from financial services to retail, from advertising to human resources, healthcare and so on, according to Gade. “We have found a lot of commonalities around how data scientists are solving these problems across these industries, and we’ve created a system that can be plugged into their workflows,” he said.

The company brings all of those models under one umbrella that supports a variety of heterogeneous model types, so MLOps teams can get this oversight.

“And that is a very, very hard technical problem to solve: to be able to ingest and digest all these different model types and then provide a single pane of glass in terms of how the model is performing, explaining the model [and] tracking the model life cycle throughout its existence,” Gade said.

Stay tuned for the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the AWS Startup Showcase: The Next Big Things in AI, Security & Life Sciences. (* Disclosure: Fiddler Labs sponsored this segment of theCUBE. Neither Fiddler Labs nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE


