Quantitative Finance

Signaloid Cloud Compute Engine

December 2025

Benchmarking

Calculating a Financial Instrument’s Value at Maturity with Arithmetic Brownian Motion

Modeling the future price evolution of a financial instrument based on past market data is a critical task for many financial institutions. This use case implements a numerical solution of the stochastic differential equation (SDE) for an arithmetic Brownian motion (ABM) process, typically calculated across a set of paths and over a number of time steps. The number of Monte Carlo iterations (paths) can exceed a million, and the number of time steps is typically the number of stock-market trading days in a year (252).
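
As a concrete illustration, the following is a minimal sketch of the conventional approach: a C-language Monte Carlo simulation applying an Euler discretization of the ABM SDE dS = μ dt + σ dW across one million paths of 252 daily steps. The parameter values and the Box-Muller normal sampler are illustrative assumptions, not details of the benchmark implementation.

/*
 *	Sketch of a conventional Monte Carlo solution of the ABM SDE
 *	dS = mu dt + sigma dW, using an Euler discretization over 252 daily
 *	time steps and accumulating summary statistics across paths.
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static double
normalSample(void)
{
	/* Box-Muller transform: two uniform samples to one standard normal sample. */
	double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
	double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);

	return sqrt(-2.0 * log(u1)) * cos(2.0 * 3.14159265358979323846 * u2);
}

int
main(void)
{
	const size_t kNumberOfPaths = 1000000;   /* Monte Carlo iterations */
	const size_t kNumberOfSteps = 252;       /* trading days in a year */
	const double kS0 = 100.0, kMu = 0.05, kSigma = 0.2;
	const double kDt = 1.0 / kNumberOfSteps;
	double       sum = 0.0, sumOfSquares = 0.0;

	for (size_t path = 0; path < kNumberOfPaths; path++)
	{
		double S = kS0;

		for (size_t step = 0; step < kNumberOfSteps; step++)
		{
			/* Euler step: S += mu*dt + sigma*sqrt(dt)*Z, with Z ~ N(0, 1). */
			S += kMu*kDt + kSigma*sqrt(kDt)*normalSample();
		}

		sum += S;
		sumOfSquares += S*S;
	}

	double mean = sum / kNumberOfPaths;
	double stddev = sqrt(sumOfSquares / kNumberOfPaths - mean*mean);

	printf("Value at maturity: mean = %lf, standard deviation = %lf\n", mean, stddev);

	return 0;
}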

When running on the Signaloid Cloud Compute Engine (SCCE), the kernel implementing the ABM SDE replaces the usual approach of sampling inputs and then evaluating each path for those samples with a direct computation on a representation of the probability distribution across paths. Thus, in a single pass over the time steps, the code kernel running on SCCE computes the same type of output distribution that would take a Monte Carlo simulation millions of iterations.
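
The following is a minimal sketch, under the same illustrative parameter assumptions as above, of how the kernel can look when targeting SCCE: each Wiener increment becomes a Gaussian-distributed value created through the Signaloid UxHw API (assumed here to be UxHwDoubleGaussDist() from uxhw.h; consult the UxHw documentation for the exact function names and signatures), so a single pass over the time steps propagates the full distribution and no path loop is needed.

/*
 *	Sketch of the ABM kernel for the Signaloid Cloud Compute Engine: the
 *	Wiener increment at each time step is a Gaussian distribution rather
 *	than a point sample, so one pass over the time steps yields the
 *	distribution of the value at maturity.
 */
#include <math.h>
#include <stdio.h>
#include <uxhw.h>

int
main(void)
{
	const size_t kNumberOfSteps = 252;       /* trading days in a year */
	const double kS0 = 100.0, kMu = 0.05, kSigma = 0.2;
	const double kDt = 1.0 / kNumberOfSteps;
	double       S = kS0;

	for (size_t step = 0; step < kNumberOfSteps; step++)
	{
		/* Each increment is distributed as N(mu*dt, sigma^2*dt). */
		S += UxHwDoubleGaussDist(kMu*kDt, kSigma*sqrt(kDt));
	}

	/* On SCCE, printing S displays its full output distribution. */
	printf("Value at maturity: %lf\n", S);

	return 0;
}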

The arithmetic Brownian motion kernel running on the Signaloid Cloud Compute Engine with a single-threaded Signaloid C0Pro-XL+ core achieves runtimes 28.8x faster than an optimized C-language Monte-Carlo-based implementation of the same kernel running on an Amazon EC2 R7iz instance. With 95% confidence, a Monte Carlo implementation requires 7.9M iterations to match the accuracy of the Signaloid UxHw®-based version, yet the UxHw-based version still delivers the speedup quoted above; requiring higher confidence in the Monte Carlo's accuracy relative to UxHw makes the UxHw speedup even greater.



Key Performance Indicator:
Speed for the same uncertainty quantification accuracy.

Signaloid Platform Solution:
Run existing non-Monte-Carlo code and use either the Signaloid Cloud Compute Engine's automated ingestion of distribution information or the Signaloid UxHw API to set program variables as probability distributions.

Competing Solution:
Run existing Monte Carlo code, or, starting from non-Monte-Carlo code, modify the code to implement Monte Carlo sampling, iteration, and aggregation of the results from the Monte Carlo iterations of the computation.

Signaloid Benefit:
28.8x faster execution time than a 7.9M-iteration Monte Carlo, while achieving the same fidelity of the full distribution result.

The underlying distribution representations are not literal histograms: the distribution plots use an adaptive algorithm to render a mutually consistent, human-interpretable depiction of both the Signaloid distribution representations and the Monte Carlo samples, to permit qualitative comparison.

Plot of the output distribution when running on the Signaloid C0Pro-XL+ core that provides the 28.8x speedup.

Plot of the output of a 7.9M-iteration Monte Carlo for this use case. This Monte Carlo iteration count provides the same or better Wasserstein distance to the ground-truth (20M-iteration) Monte Carlo as execution on the Signaloid C0Pro-XL+ core (which is 28.8x faster).

Plot of the ground-truth (20M-iteration) Monte Carlo.

Benchmarking Methodology

Monte Carlo simulations work by statistical sampling, and therefore each multi-iteration Monte Carlo run results in a slightly different output distribution. By contrast, Signaloid's platform is deterministic: each run produces the same distribution for a given Signaloid C0 core type.

The performance improvements are calculated by comparing Signaloid's platform with a Monte Carlo simulation of similar distribution quality. First, we run a large Monte Carlo simulation (about 50M iterations) on a high-performance AWS R7iz instance and use its result as the ground-truth reference for distribution quality. Then we measure the performance of Signaloid's technology and compare it with the performance of the Monte Carlo iteration count whose output distribution's Wasserstein distance to the ground-truth reference is, with 95% confidence, no worse than that of the output distribution computed on the Signaloid core.
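
For concreteness, the following is a minimal sketch of one way to compute the first Wasserstein distance between two empirical one-dimensional distributions with equal sample counts, namely as the mean absolute difference of their order statistics. Unequal sample counts (e.g., 7.9M versus 20M samples) require interpolating the empirical quantile functions, which this sketch omits; the benchmark's actual comparison code may differ.

/*
 *	Sketch of the first (1-D) Wasserstein distance between two empirical
 *	distributions with the same number of samples: sort both sample sets
 *	in place and average the absolute differences of corresponding order
 *	statistics.
 */
#include <math.h>
#include <stdlib.h>

static int
compareDoubles(const void *a, const void *b)
{
	double x = *(const double *)a;
	double y = *(const double *)b;

	return (x > y) - (x < y);
}

double
wassersteinDistance(double *samplesA, double *samplesB, size_t count)
{
	double distance = 0.0;

	qsort(samplesA, count, sizeof(double), compareDoubles);
	qsort(samplesB, count, sizeof(double), compareDoubles);

	for (size_t i = 0; i < count; i++)
	{
		distance += fabs(samplesA[i] - samplesB[i]);
	}

	return (count > 0) ? distance / count : 0.0;
}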

Performance data based on Fall 2025 release of Signaloid's technology.

