Signaloid Cloud Compute Engine
Use Case Family:
Engineering
Use Case:
Finite Element Modeling
The finite element method (FEM) is a numerical technique for solving differential equations, with importance across a broad set of fields ranging from engineering to quantitative finance. This use case implements uncertainty quantification of a simple 1D FEM model and compares it to an implementation that performs the uncertainty quantification using Monte Carlo simulation.
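For reference, and independent of the actual use-case source (which is not reproduced here), a minimal 1D FEM kernel might look like the following sketch: a hypothetical axial-bar model with linear elements on a uniform mesh, assembled into a tridiagonal system and solved with the Thomas algorithm. All names and parameter values are illustrative, not those of the use case.

```c
#include <stdio.h>

/*
 *  Hypothetical 1D FEM sketch: axial bar, -d/dx(E*A*du/dx) = q on [0, L],
 *  fixed at x = 0, free at x = L, uniform mesh of linear elements.
 */
enum { kNumberOfElements = 100 };

static void
solveTridiagonal(double *lower, double *diag, double *upper, double *rhs, double *solution, int n)
{
	/*  Forward elimination  */
	for (int i = 1; i < n; i++)
	{
		double m = lower[i] / diag[i - 1];
		diag[i] -= m * upper[i - 1];
		rhs[i]  -= m * rhs[i - 1];
	}

	/*  Back substitution  */
	solution[n - 1] = rhs[n - 1] / diag[n - 1];
	for (int i = n - 2; i >= 0; i--)
	{
		solution[i] = (rhs[i] - upper[i] * solution[i + 1]) / diag[i];
	}
}

double
femTipDisplacement(double E, double A, double L, double q)
{
	const int n = kNumberOfElements;	/*  unknown nodes 1..n (node 0 is fixed)  */
	double    h = L / n;
	double    k = E * A / h;		/*  element stiffness  */
	double    lower[kNumberOfElements], diag[kNumberOfElements],
	          upper[kNumberOfElements], rhs[kNumberOfElements], u[kNumberOfElements];

	/*  Assemble the reduced tridiagonal system (Dirichlet node eliminated)  */
	for (int i = 0; i < n; i++)
	{
		diag[i]  = (i == n - 1) ? k : 2.0 * k;
		lower[i] = -k;
		upper[i] = -k;
		rhs[i]   = (i == n - 1) ? q * h / 2.0 : q * h;
	}

	solveTridiagonal(lower, diag, upper, rhs, u, n);

	return u[n - 1];			/*  displacement at the free end  */
}

int
main(void)
{
	/*  Nominal parameter values (illustrative only)  */
	double tip = femTipDisplacement(200e9 /* E, Pa */, 1e-4 /* A, m^2 */, 1.0 /* L, m */, 1e3 /* q, N/m */);

	printf("tip displacement: %e m\n", tip);

	return 0;
}
```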
When running on the Signaloid Compute Engine, the uncertainty-quantified FEM requires only minimal changes: it sets some of its model parameters to probability distributions representing their uncertainty. This allows the code kernel running on the Signaloid Compute Engine to compute the corresponding output distribution in a single pass over the execution. The Monte Carlo simulation, by contrast, requires first explicitly modifying the application kernel to implement Monte Carlo sampling, iteration, and result aggregation, and then running the modified implementation for thousands to hundreds of thousands of iterations.
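As an illustration of what such minimal changes could look like, the sketch below sets two parameters of the hypothetical kernel above to distributions, assuming Signaloid's UxHw API (the uxhw.h header and calls such as UxHwDoubleGaussDist and UxHwDoubleUniformDist). The distributions and values shown are illustrative, not those of the actual use case.

```c
#include <stdio.h>
#include <uxhw.h>

/*  Hypothetical 1D FEM kernel from the earlier sketch  */
extern double femTipDisplacement(double E, double A, double L, double q);

int
main(void)
{
	/*
	 *  Replace point values with distributions representing parameter
	 *  uncertainty (illustrative values, assumed API).
	 */
	double E = UxHwDoubleGaussDist(200e9, 5e9);	/*  Young's modulus, Pa  */
	double q = UxHwDoubleUniformDist(0.9e3, 1.1e3);	/*  distributed load, N/m  */

	/*  The otherwise unmodified kernel now yields a distribution-valued result.  */
	double tip = femTipDisplacement(E, 1e-4, 1.0, q);

	printf("tip displacement: %lf\n", tip);

	return 0;
}
```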
For the 1D FEM, an implementation running on the Signaloid C0 processor runs 15.6x faster than an already fast C-language Monte-Carlo-based implementation of the same model running on an AWS r7iz high-performance instance.
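By contrast with the version above, a Monte-Carlo-based version must wrap the same kernel in an explicit sampling, iteration, and aggregation loop, along the lines of the following sketch (illustrative only; not the benchmarked implementation).

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/*  Hypothetical 1D FEM kernel from the earlier sketch  */
extern double femTipDisplacement(double E, double A, double L, double q);

static const double kTwoPi = 6.283185307179586;

/*  Box-Muller transform: standard normal variate from two uniform variates  */
static double
randomGaussian(void)
{
	double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
	double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);

	return sqrt(-2.0 * log(u1)) * cos(kTwoPi * u2);
}

int
main(void)
{
	enum { kIterations = 200000 };
	static double samples[kIterations];

	/*  Explicit sampling and iteration around the FEM kernel  */
	for (int i = 0; i < kIterations; i++)
	{
		double E = 200e9 + 5e9 * randomGaussian();		/*  Pa   */
		double q = 0.9e3 + 0.2e3 * (rand() / (double)RAND_MAX);	/*  N/m  */

		samples[i] = femTipDisplacement(E, 1e-4, 1.0, q);
	}

	/*  Aggregation: the sample set forms the empirical output distribution  */
	double mean = 0.0;
	for (int i = 0; i < kIterations; i++)
	{
		mean += samples[i] / kIterations;
	}

	printf("mean tip displacement: %e m\n", mean);

	return 0;
}
```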
Note 1:
The underlying distribution representations are not literal histograms: the distribution plots use an adaptive algorithm to render a mutually consistent and human-interpretable depiction of both the Signaloid distribution representations and the Monte Carlo samples, to permit qualitative comparison.
Note 2:
Because Monte Carlo works by statistical sampling, each set of multi-iteration Monte Carlo runs (e.g., each time a 200k-iteration Monte Carlo is run) results in a slightly different final distribution. By contrast, the results from Signaloid's platform are completely deterministic and yield the same distribution each time, for a given Signaloid C0 core type.

The performance improvement over Monte Carlo reported above shows the speedup of running on Signaloid's platform, compared to running a Monte Carlo simulation on an AWS r7iz high-performance instance, for the same quality of distribution, while accounting for the variations inherent in Monte Carlo. To compare distribution quality, we run a large Monte Carlo simulation until convergence (e.g., 1M iterations) and use it as a baseline or ground-truth reference for distribution quality (not for performance). We then compare the performance of the Signaloid solution against the Monte Carlo iteration count for which the output distributions of 100 out of 100 repetitions are all at a smaller Wasserstein distance to the baseline reference's output distribution than the Signaloid-core-executed algorithm's output distribution is. Intuitively, this analysis gives the Monte Carlo iteration count whose output distribution is never worse than that of the Signaloid-core-executed computation.

The 15.6x speedup is achieved when running on a Signaloid C0Pro-S core. Performance data are based on the Spring 2024 release of Signaloid's technology.
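For reference, the Wasserstein distance used above as the distribution-quality metric reduces, for two equal-sized sample sets, to the mean absolute difference of their matched order statistics. The following sketch (illustrative only; not the actual evaluation harness) shows such a 1-Wasserstein computation for equal-sized empirical samples.

```c
#include <math.h>
#include <stdlib.h>

static int
compareDoubles(const void *a, const void *b)
{
	double x = *(const double *)a;
	double y = *(const double *)b;

	return (x > y) - (x < y);
}

/*
 *  1-Wasserstein (earth mover's) distance between two empirical
 *  distributions with the same number of samples: sort both sample
 *  sets (in place) and average the absolute differences of matched
 *  order statistics.
 */
double
wasserstein1(double *samplesA, double *samplesB, size_t n)
{
	double sum = 0.0;

	qsort(samplesA, n, sizeof(double), compareDoubles);
	qsort(samplesB, n, sizeof(double), compareDoubles);

	for (size_t i = 0; i < n; i++)
	{
		sum += fabs(samplesA[i] - samplesB[i]);
	}

	return sum / n;
}
```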