
Benchmarking
Uncertainty Quantification of Cutting Stress Modeling for Metal Alloys
Precipitate dislocation models provide metallurgists with insight into a metal alloy’s strength. Calculating an estimate of the cutting stress, subject to uncertainty in the material’s measured properties, traditionally involves running Monte Carlo simulations.
When running on the Signaloid Cloud Compute Engine (SCCE), the kernels implementing the calculation replace the usual approach of sampling the inputs and then evaluating the kernel on those samples with a direct computation on representations of the inputs' probability distributions. Thus, in a single evaluation, the code kernel running on SCCE computes the same kind of output distribution that a Monte Carlo simulation would need thousands or hundreds of thousands of iterations to produce.
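To make the contrast concrete, the sketch below shows one plausible C form of such a kernel and a single distribution-valued evaluation of it. The weak-coupling precipitate-shearing expression, the variable names, and the numeric input distributions are illustrative assumptions, not the exact model or measurement data used in this benchmark; the UxHw distribution constructors (UxHwDoubleGaussDist and UxHwDoubleUniformDist, declared in uxhw.h) should be checked against the current SDK headers.

#include <math.h>
#include <stdio.h>
#include <uxhw.h>

/*
 * Illustrative cutting-stress kernel. Assumption: weak-coupling
 * precipitate shearing with line tension T = G*b*b/2; the model
 * actually benchmarked may differ.
 *
 *  gammaAPB : anti-phase-boundary (APB) energy (J/m^2)
 *  r        : mean particle radius (m)
 *  f        : precipitate volume fraction (dimensionless)
 *  G        : shear modulus (Pa)
 *  M        : Taylor factor (dimensionless)
 *  b        : Burgers vector magnitude (m)
 */
static double
cuttingStress(double gammaAPB, double r, double f, double G, double M, double b)
{
    double lineTension = 0.5 * G * b * b;
    double obstacleTerm = sqrt((6.0 * gammaAPB * f * r) / (M_PI * lineTension)) - f;

    return M * (gammaAPB / (2.0 * b)) * obstacleTerm;
}

int
main(void)
{
    /*
     * Distribution-valued inputs (illustrative parameter values, not
     * the benchmark's actual measurement data).
     */
    double gammaAPB = UxHwDoubleGaussDist(0.12, 0.01);       /* J/m^2 */
    double r        = UxHwDoubleGaussDist(10.0e-9, 1.0e-9);  /* m */
    double f        = UxHwDoubleUniformDist(0.08, 0.12);
    double G        = UxHwDoubleGaussDist(80.0e9, 2.0e9);    /* Pa */
    double M        = UxHwDoubleUniformDist(3.0, 3.1);
    double b        = UxHwDoubleGaussDist(2.5e-10, 5.0e-12); /* m */

    /*
     * A single call: on a Signaloid core the result carries the full
     * output distribution, with no sampling loop.
     */
    double sigma = cuttingStress(gammaAPB, r, f, G, M, b);

    printf("cutting stress: %lf Pa\n", sigma);

    return 0;
}

In the Monte Carlo baseline, the same cuttingStress() function would instead be called once per iteration on scalar samples drawn from these input distributions.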
The kernels implementing the calculation, when running on the Signaloid Cloud Compute Engine with a single-threaded Signaloid C0Pro-XS core, achieve runtimes 34x faster than an optimized C-language Monte-Carlo-based implementation of the same kernel running on an Amazon EC2 R7iz instance. With 95% confidence, a 4k-iteration Monte Carlo implementation matches the accuracy of the Signaloid UxHw®-based version, yet the UxHw-based version delivers the speedup quoted above; if one requires higher confidence that the Monte Carlo matches the accuracy of UxHw, the UxHw speedups are even greater.
The plots here present the distribution of cutting stress of a material calculated from the APB energy, mean particle radius, volume fraction, shear modulus, Taylor factor, and the Burgers vector magnitude, each specified with an appropriate distribution to represent their degree of measurement or epistemic uncertainty.
Plot of the output distribution when running on the Signaloid C0Pro-XS core, which provides the 34.3x speedup.
Plot of the output of a 3.9k-iteration Monte Carlo for this use case. This Monte Carlo iteration count provides a Wasserstein distance to the ground-truth (20M-iteration) Monte Carlo that is the same as or better than that of execution on a Signaloid C0Pro-XS core (which is 34.3x faster).
Plot of the ground-truth (20M-iteration) Monte Carlo.
Benchmarking Methodology
Monte Carlo simulations work by statistical sampling, and therefore each multi-iteration Monte Carlo run results in a slightly different output distribution. By contrast, Signaloid's platform is deterministic: each run produces the same distribution for a given Signaloid C0 core type.
The performance improvements are calculated by comparing Signaloid's platform against a Monte Carlo simulation that produces an output distribution of comparable quality. First, we run a large Monte Carlo simulation (about 50M iterations) on a high-performance AWS r7iz instance and use its result as the ground-truth reference for distribution quality. Then we measure the performance of Signaloid's technology and compare it against the performance of a Monte Carlo simulation whose iteration count is just large enough that, with 95% confidence, its output distribution's Wasserstein distance to the ground-truth reference is no larger than that of the output distribution computed on the Signaloid core.
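For reference, the distribution-quality metric can be computed as in the following sketch, which evaluates the first-order Wasserstein distance between two empirical distributions represented as equal-length sample arrays. The function names are illustrative; because the benchmark compares runs with very different sample counts, the general quantile-based form of the metric would be needed there rather than this equal-length simplification.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort over doubles. */
static int
compareDoubles(const void *a, const void *b)
{
    double x = *(const double *)a;
    double y = *(const double *)b;

    return (x > y) - (x < y);
}

/*
 * First-order Wasserstein distance between two empirical distributions
 * given as equal-length sample arrays: sort both and average the absolute
 * differences of matching order statistics. Sorts the arrays in place.
 */
static double
wassersteinDistance(double *samplesA, double *samplesB, size_t count)
{
    double sum = 0.0;

    qsort(samplesA, count, sizeof(double), compareDoubles);
    qsort(samplesB, count, sizeof(double), compareDoubles);

    for (size_t i = 0; i < count; i++)
    {
        sum += fabs(samplesA[i] - samplesB[i]);
    }

    return sum / (double)count;
}

int
main(void)
{
    /* Toy sample sets for illustration only. */
    double samplesA[] = {1.0, 2.0, 3.0, 4.0};
    double samplesB[] = {1.5, 2.5, 3.5, 4.5};

    printf("W1 = %lf\n", wassersteinDistance(samplesA, samplesB, 4));

    return 0;
}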
Performance data are based on the Fall 2025 release of Signaloid's technology.


