Using UxHw Technology to Quantify Uncertainty of AI/ML Model Outputs

General

Uncertainty quantification in artificial intelligence and machine learning (AI/ML) is a technique that gives practitioners and stakeholders a clear sense of how much they should trust the output of an AI/ML model. The result of uncertainty quantification tells the user of a model the range of different outputs the model could generate for a given distribution of inputs, as well as which of those outputs are more likely to occur. Today, such uncertainty quantification is usually performed using what are called Monte Carlo methods. This technology explainer outlines how organizations deal with uncertainty of model predictions today and describes how developers and organizations can use Signaloid's UxHw technology to achieve automated uncertainty quantification for AI/ML models.
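The idea above can be illustrated with a minimal sketch: propagate an input distribution through a model by sampling, then summarize the resulting output distribution. The model function and the input distribution below are hypothetical placeholders, not taken from the original text.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "model": any deterministic function of its inputs would do here.
def model(x):
    return np.sin(x) + 0.1 * x**2

# Input uncertainty: suppose the input is only known to be "around 1.0".
x_samples = rng.normal(loc=1.0, scale=0.2, size=100_000)

# Propagate the input distribution through the model (Monte Carlo).
y_samples = model(x_samples)

# Summarize the output distribution: the range of likely outputs and their spread.
print(f"mean   = {y_samples.mean():.3f}")
print(f"stddev = {y_samples.std():.3f}")
lo, hi = np.percentile(y_samples, [2.5, 97.5])
print(f"95% interval = [{lo:.3f}, {hi:.3f}]")
```

The printed interval is exactly the kind of answer uncertainty quantification provides: not a single number, but a statement about which outputs are plausible and how likely they are.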

Why It Matters

Quantifying the uncertainty of the outputs of AI/ML models is an important part of understanding how much those outputs can be trusted. The trust that decision makers place in a model's output in turn shapes the decisions they make based on it. Signaloid's UxHw technology makes it easy to implement uncertainty quantification for AI/ML models (e.g., pre-trained models in ONNX format) and can be orders of magnitude faster to implement, as well as faster and cheaper to run, than the alternatives.

The Technical Details

Quantifying the uncertainty of predictions from an AI/ML model is important for making decisions based on the model. It is particularly important if an AI/ML model is used in a fully-automated system (such as a self-driving car or an automated trading system) where decisions are made by a computer system and where mistakes can be costly or fatal. It is also important in a human-in-the-loop system such as a medical diagnosis and prognosis system, where decisions are made by humans based on results from an AI/ML model.

Practitioners often use either Monte Carlo methods or approximations based on mathematical simplifications to estimate the uncertainty of the output of a predictive model. Monte Carlo methods are computationally expensive: they involve running a model repeatedly on inputs sampled from their distributions and aggregating the set of outputs so computed. In some special cases, it is possible to derive closed-form analytic solutions for the uncertainty of the output of a model (or an approximation of it), as a function of the uncertainties of its inputs. Outside those special cases, using approximations can lead to incorrect estimates of uncertainty and hence to unnecessarily conservative (or unnecessarily optimistic) decisions based on AI/ML model outputs.
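The contrast between the two approaches can be seen in one of the special cases mentioned above: a linear model, where the closed-form solution is exact and Monte Carlo simply reproduces it at far greater computational cost. The coefficients and input distribution below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear "model" y = a*x + b: one of the special cases with a closed-form solution.
a, b = 2.5, -1.0
mu, sigma = 3.0, 0.4          # input x ~ Normal(mu, sigma)

# Closed-form propagation: for a linear map, y ~ Normal(a*mu + b, |a|*sigma).
analytic_mean = a * mu + b
analytic_std = abs(a) * sigma

# Monte Carlo propagation: sample inputs, run the model, aggregate the outputs.
x_samples = rng.normal(mu, sigma, size=200_000)
y_samples = a * x_samples + b

print(f"analytic:    mean={analytic_mean:.3f}, std={analytic_std:.3f}")
print(f"Monte Carlo: mean={y_samples.mean():.3f}, std={y_samples.std():.3f}")
```

For nonlinear models no such closed form generally exists, which is why practitioners fall back on expensive Monte Carlo runs or on approximations that can misestimate the uncertainty.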

Computing platforms such as the Signaloid Cloud Compute Engine (SCCE) and Signaloid's hardware modules, which implement Signaloid's UxHw technology, allow applications to perform arithmetic on probability distributions with the same ease with which they perform arithmetic on integer and floating-point data. This capability makes it easy to implement uncertainty quantification for AI/ML model runtime systems that run on hardware implementing Signaloid's UxHw technology and makes those uncertainty quantification analyses much faster to run. One relevant recent example [1] demonstrated a 100-fold speedup when running on SCCE compared to running traditional Monte Carlo analyses on a high-end Intel Xeon-based server with similar hardware resources to SCCE. Developers can run their pre-trained ONNX models on Signaloid's cloud platform or hardware compute modules, or can build tools to train models that exploit Signaloid's UxHw technology to represent model parameters as probability distributions.
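The programming model this enables, where a probability distribution behaves like an ordinary numeric value, can be sketched conceptually in plain Python. The toy class below emulates the idea with internal sample arrays; it is emphatically not Signaloid's API, which tracks distribution representations in the underlying hardware rather than in software sample arrays.

```python
import numpy as np

N = 100_000
rng = np.random.default_rng(1)

class Uncertain:
    """Toy 'uncertain value': carries a distribution through arithmetic
    via an internal sample array. Conceptual sketch only; NOT Signaloid's
    API, which represents distributions in hardware, not as sample arrays."""

    def __init__(self, samples):
        self.samples = np.asarray(samples, dtype=float)

    @classmethod
    def normal(cls, mean, std):
        return cls(rng.normal(mean, std, N))

    @staticmethod
    def _values(other):
        return other.samples if isinstance(other, Uncertain) else other

    def __add__(self, other):
        return Uncertain(self.samples + self._values(other))

    def __mul__(self, other):
        return Uncertain(self.samples * self._values(other))

    def mean(self):
        return self.samples.mean()

    def std(self):
        return self.samples.std()

# Arithmetic on distributions reads like arithmetic on plain numbers:
w = Uncertain.normal(0.8, 0.05)   # an uncertain model weight
x = Uncertain.normal(2.0, 0.1)    # an uncertain input
y = w * x + 0.5                   # the output is itself a distribution
print(f"y: mean={y.mean():.3f}, std={y.std():.3f}")
```

The point of the sketch is the ergonomics: once distributions are first-class values, an existing model's arithmetic performs uncertainty quantification as a side effect of running the model once, rather than requiring thousands of explicit Monte Carlo repetitions in application code.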

The Takeaway

Uncertainty quantification of AI/ML models gives practitioners a sense of how much they should trust the output of a model. Practitioners often use either Monte Carlo methods or approximations based on mathematical simplifications to estimate the uncertainty of the output of a predictive model. But Monte Carlo methods are slow, while approximations can be inaccurate. Signaloid's UxHw technology allows applications to perform arithmetic on probability distributions, making it easy to implement uncertainty quantification for AI/ML model runtime systems and making those uncertainty quantification analyses much faster to run.

References
  1. Janith Petangoda, Chatura Samarakoon, and Phillip Stanley-Marbell. "Gaussian Process Predictions with Uncertain Inputs Enabled by Uncertainty-Tracking Processor Architectures." In NeurIPS 2024 Workshop Machine Learning with new Compute Paradigms. https://openreview.net/forum?id=zKt7uVOttG

Schedule a Demo Call
Request Whitepaper