source link: https://www.infoq.com/news/2023/01/amazon-fortuna-uncertainty/?utm_campaign=infoq_content&utm_term=global

Amazon Releases Fortuna, an Open-Source Library for ML Model Uncertainty Quantification

Jan 04, 2023 1 min read

AWS announced that Fortuna, an open-source toolkit for ML model uncertainty quantification, has been made generally available. The calibration methods Fortuna offers, such as conformal prediction, can be applied to any trained neural network to produce calibrated uncertainty estimates.

There are numerous documented methods for estimating or calibrating the uncertainty of predictions; however, current tools and libraries for quantifying uncertainty have a limited range and do not provide a comprehensive collection of methods. This imposes a large overhead and makes it difficult to incorporate uncertainty into production systems. Fortuna bridges this gap by compiling well-known techniques and making them accessible through a standardized, user-friendly interface.

Fortuna offers three different usage modes. The first, starting from uncertainty estimates, has minimal compatibility requirements and is the quickest way to interact with the library. This usage mode offers conformal prediction methods for both classification and regression.
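To illustrate the idea behind conformal regression, here is a minimal from-scratch sketch of split conformal prediction in NumPy. It is not Fortuna's actual API; the toy model and data are assumptions for illustration. The key step is computing a quantile of absolute residuals on a held-out calibration set, which becomes the half-width of the prediction interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration data: noisy targets around y = 2x.
x_cal = rng.uniform(0, 1, 500)
y_cal = 2 * x_cal + rng.normal(0, 0.1, 500)

def predict(x):
    # Stand-in for any already-trained regression model.
    return 2 * x

# Split conformal regression: nonconformity score = |y - y_hat| on the
# calibration set; a finite-sample-corrected (1 - alpha) quantile of the
# scores gives the interval half-width.
alpha = 0.1
scores = np.abs(y_cal - predict(x_cal))
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Intervals for new inputs: [y_hat - q, y_hat + q].
x_test = np.array([0.3, 0.7])
lower = predict(x_test) - q
upper = predict(x_test) + q
```

Under the usual exchangeability assumption, intervals built this way contain the true target with probability at least 1 - alpha, regardless of how good the underlying model is.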

The second mode, starting from model outputs, assumes a model has already been trained in some framework and arrives at Fortuna with that model's outputs. This usage mode allows users to calibrate model outputs, estimate uncertainty, compute metrics, and obtain conformal sets.
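The following sketch shows what working from precomputed model outputs can look like for classification: given softmax probabilities from any framework, a conformal prediction set keeps every class whose nonconformity score falls within a calibrated threshold. This is a from-scratch illustration of the technique, not Fortuna's API; the Dirichlet-sampled probabilities are an assumption standing in for real model outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cal, n_classes = 200, 3

# Assume a trained classifier already produced softmax probabilities for a
# labeled calibration set (here simulated) and for new test inputs.
probs_cal = rng.dirichlet(np.ones(n_classes) * 2, n_cal)
labels_cal = np.array([rng.choice(n_classes, p=p) for p in probs_cal])

# Nonconformity score: 1 minus the probability assigned to the true class.
alpha = 0.1
scores = 1 - probs_cal[np.arange(n_cal), labels_cal]
qhat = np.quantile(
    scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal, method="higher"
)

# A conformal set keeps every class whose score is within the threshold,
# i.e. every class with predicted probability >= 1 - qhat.
probs_test = rng.dirichlet(np.ones(n_classes) * 2, 5)
sets = [np.where(1 - p <= qhat)[0] for p in probs_test]
```

Uncertain inputs naturally receive larger sets, which is one way calibrated outputs make uncertainty actionable downstream.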

The third mode, starting from Flax models, supports a number of Bayesian inference methods that can be applied to deep neural networks written in Flax. The library makes it easy to run benchmarks and will enable practitioners to build robust and reliable AI solutions by taking advantage of advanced uncertainty quantification techniques.

Another widely used library for estimating model uncertainty is scikit-learn, an open-source machine learning library for Python. It includes functions for cross-validation and bootstrapping, as well as support for building ensemble models. TensorFlow Probability, built on top of TensorFlow, also provides tools for estimating uncertainty, including support for Bayesian neural networks and Monte Carlo methods. PyMC3 is a library for probabilistic programming that allows users to build Bayesian models through a high-level programming interface.
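The bootstrapping and ensembling approaches mentioned above share a simple idea: refit a model on resampled data and treat disagreement between the ensemble members as an uncertainty estimate. Here is a minimal NumPy sketch of that idea using a toy linear model; the data and model are assumptions for illustration, not tied to any of the libraries named.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: noisy targets around y = 3x + 1.
x = rng.uniform(0, 1, 100)
y = 3 * x + 1 + rng.normal(0, 0.2, 100)

# Bootstrap ensemble: refit a simple linear model on resampled data and use
# the spread of the members' predictions as an uncertainty proxy.
x_query = np.array([0.0, 0.5, 1.0])
preds = []
for _ in range(200):
    idx = rng.integers(0, len(x), len(x))
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    preds.append(slope * x_query + intercept)
preds = np.asarray(preds)

mean_pred = preds.mean(axis=0)  # ensemble prediction
std_pred = preds.std(axis=0)    # per-input uncertainty estimate
```

Bayesian neural networks and Monte Carlo methods generalize the same pattern: instead of a bootstrap, the ensemble comes from samples of a posterior distribution over model parameters.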

Applications that call for making critical decisions depend on an accurate evaluation of predictive uncertainty. When uncertainty is quantified, it becomes possible to judge the reliability of model predictions, defer to human judgment, or decide whether a model can be used safely.

About the Author

Daniel Dominguez

Daniel Dominguez is an experienced engineer and MSc with a diverse educational background, having studied in Colombia, France, Canada, and the United States. He has over 14 years of experience in the internet and technology industry, working with companies ranging from startups in Silicon Valley to Fortune 500 corporations. Daniel holds a Software Product Management Specialization from the University of Alberta and a Machine Learning Specialization from the University of Washington. In addition to his professional work, Daniel is also a member of the editorial team for the AI/ML/Data Engineering community at InfoQ.

