
Benchmarking
Calculating Value at Risk with Arithmetic Brownian Motion
Value at Risk (VaR) is an important quantitative risk metric that financial services and insurance institutions use to judge the potential loss on an investment. This use case implements a numerical solution of the stochastic differential equation (SDE) for an arithmetic Brownian motion (ABM) process and then uses the final distribution of the instrument price at maturity to compute the VaR. Such processes are typically evaluated across a large set of paths (often millions) and over a number of time steps (commonly the 252 stock market trading days in a year).
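The path-by-path evaluation described above can be sketched as a conventional Monte Carlo simulation. The parameter values below (initial price, drift, volatility, path and step counts) are illustrative assumptions, not the configuration used in Signaloid's benchmark:

```python
import numpy as np

def simulate_abm_var(s0, mu, sigma, n_paths, n_steps, horizon=1.0,
                     confidence=0.99, seed=0):
    """Euler discretization of the ABM SDE dS = mu dt + sigma dW,
    followed by VaR computed from the distribution of price at maturity."""
    rng = np.random.default_rng(seed)
    dt = horizon / n_steps
    prices = np.full(n_paths, s0, dtype=np.float64)
    for _ in range(n_steps):
        # ABM increment per step: constant drift plus Gaussian diffusion.
        prices += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    losses = s0 - prices                   # loss relative to initial price
    var = np.quantile(losses, confidence)  # e.g., the 99% VaR
    return var, prices

# Illustrative run: 100,000 paths over 252 trading days.
var_99, _ = simulate_abm_var(s0=100.0, mu=0.05, sigma=20.0,
                             n_paths=100_000, n_steps=252)
```

Note how the cost scales with the product of path count and step count, which is what makes the millions-of-paths regime expensive.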
When running on the Signaloid Cloud Compute Engine (SCCE), the kernel implementing the ABM SDE replaces the usual approach of sampling inputs and then evaluating each path individually with a direct computation on a representation of the probability distribution across paths. Thus, in a single computation over the time steps, the code kernel running on SCCE can compute the same kind of output distribution that a Monte Carlo simulation would need millions of iterations to obtain.
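To see why a single pass over the time steps can recover the full distribution for ABM, note that the SDE integrates in closed form (this is a standard stochastic-calculus identity, not specific to any particular implementation):

```latex
% Arithmetic Brownian motion SDE and its terminal distribution.
dS_t = \mu\,dt + \sigma\,dW_t
\quad\Longrightarrow\quad
S_T = S_0 + \mu T + \sigma W_T,
\qquad W_T \sim \mathcal{N}(0,\, T),
```

so the price at maturity satisfies $S_T \sim \mathcal{N}(S_0 + \mu T,\ \sigma^2 T)$. Each time step maps a Gaussian state distribution to another Gaussian, so propagating a distribution representation step by step yields the maturity distribution directly, and the VaR at confidence level $\alpha$ is simply the $\alpha$-quantile of the loss $S_0 - S_T$.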
The arithmetic Brownian motion kernel running on the Signaloid Cloud Compute Engine with a single-threaded Signaloid C0Pro-XL core achieves runtimes 117x faster than an optimized C-language Monte-Carlo-based implementation of the same kernel running on an Amazon EC2 R7iz instance. With 99% confidence, a Monte Carlo implementation needs 30M iterations to match the accuracy of the Signaloid UxHw®-based version, yet the UxHw-based version delivers the speedup quoted above; if a higher confidence level is required for the Monte Carlo result to match UxHw's accuracy, the speedup of UxHw is even greater.
Benchmarking Methodology
Monte Carlo simulations work by statistical sampling, and therefore each multi-iteration Monte Carlo run produces a slightly different output distribution with slightly different statistics (mean, quantiles, etc.). By contrast, Signaloid's platform is deterministic: for a given Signaloid C0 core type, each run produces the same distribution and the same associated statistics.
We calculate the performance improvements by comparing Signaloid's platform against a Monte Carlo simulation of comparable output quality. First, we run a large Monte Carlo simulation (e.g., 20M iterations) on a high-performance AWS R7iz instance and use its result as the ground-truth reference for output quality. We then measure the performance of Signaloid's technology and compare it against the smallest Monte Carlo iteration count whose output, with 99% confidence, matches the accuracy (measured in basis points relative to the ground-truth reference) of the algorithm's output when executed on a Signaloid core.
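The accuracy-matching step above can be sketched as follows. This is a hypothetical illustration, not the benchmark's actual harness; the ABM parameters, tolerance, and trial counts are assumptions, and it exploits ABM's closed-form terminal distribution to sample the maturity price directly rather than stepping through 252 days:

```python
import numpy as np

MU, SIGMA, S0, T = 0.05, 20.0, 100.0, 1.0   # assumed ABM parameters

def mc_var(n_iter, rng, confidence=0.99):
    # Sample the ABM terminal price S_T = S0 + mu*T + sigma*sqrt(T)*Z
    # directly, then take the confidence-level quantile of the loss.
    s_t = S0 + MU * T + SIGMA * np.sqrt(T) * rng.standard_normal(n_iter)
    return np.quantile(S0 - s_t, confidence)

rng = np.random.default_rng(1)
baseline = mc_var(20_000_000, rng)          # large-run ground-truth reference

def fraction_within_tolerance(n_iter, tol_bp, n_trials=100):
    """Fraction of n_iter-sized Monte Carlo runs whose VaR lands within
    tol_bp basis points (relative to S0) of the ground-truth baseline."""
    errs_bp = np.array([
        abs(mc_var(n_iter, rng) - baseline) / S0 * 10_000
        for _ in range(n_trials)
    ])
    return np.mean(errs_bp <= tol_bp)

# Sweeping n_iter upward until fraction_within_tolerance reaches 0.99
# gives the iteration count that matches the target accuracy with
# 99% confidence.
```

In this framing, the 30M-iteration figure quoted earlier is the iteration count at which the Monte Carlo implementation reaches the accuracy of the UxHw-based version with 99% confidence.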
Performance data are based on the Fall 2025 release of Signaloid's technology.