Signaloid Cloud Compute Engine

Use Case Family:

Quantitative Finance

Use Case

Path-Dependent Pricing: Stochastic Processes with Correlated Brownian Motion

Modeling the price evolution of a financial instrument in the presence of fluctuating market conditions is a critical task for many financial institutions. This use case implements a numerical solution of the stochastic differential equation (SDE) for a correlated Brownian motion (CBM) process. Numerical solution of the CBM SDE is often done using a Monte Carlo simulation over a set of paths and across a number of time steps. The number of paths is typically in the thousands to hundreds of thousands, and the number of steps is typically the number of stock-market trading days in a year (252), or, for model scenarios that permit certain simplifications, a single time step.
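
As a concrete (purely illustrative) reference point, the sketch below shows how such a path-and-step Monte Carlo is typically structured in C: an Euler discretization of two geometric-Brownian-motion price processes whose driving Brownian increments are correlated through the Cholesky factor of their correlation matrix. All model parameters (drift, volatility, correlation, initial prices) and the path and step counts are placeholder values, not the parameters of the benchmarked implementation.

```c
/*
 * Illustrative Monte Carlo sketch (not the benchmarked implementation):
 * Euler discretization of two geometric-Brownian-motion price processes
 * driven by correlated Brownian increments (correlation rho, combined via
 * the Cholesky factor of the 2x2 correlation matrix). All parameter values
 * are placeholders.
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static const double kPi = 3.14159265358979323846;

/*
 * One standard-normal sample via the Box-Muller transform.
 */
static double
randomGaussian(void)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);

    return sqrt(-2.0 * log(u1)) * cos(2.0 * kPi * u2);
}

int
main(void)
{
    const int    kNumberOfPaths = 54000;     /* typically thousands to hundreds of thousands */
    const int    kNumberOfSteps = 252;       /* stock-market trading days in a year */
    const double kDeltaT = 1.0 / kNumberOfSteps;
    const double mu1 = 0.05, sigma1 = 0.20;  /* placeholder drift and volatility, asset 1 */
    const double mu2 = 0.03, sigma2 = 0.15;  /* placeholder drift and volatility, asset 2 */
    const double rho = 0.6;                  /* placeholder correlation of the Brownian motions */
    double       sumS1 = 0.0, sumS2 = 0.0;

    for (int path = 0; path < kNumberOfPaths; path++)
    {
        double s1 = 100.0, s2 = 100.0;       /* placeholder initial prices */

        for (int step = 0; step < kNumberOfSteps; step++)
        {
            double z1  = randomGaussian();
            double z2  = randomGaussian();
            double dW1 = sqrt(kDeltaT) * z1;
            double dW2 = sqrt(kDeltaT) * (rho * z1 + sqrt(1.0 - rho * rho) * z2);

            s1 += mu1 * s1 * kDeltaT + sigma1 * s1 * dW1;
            s2 += mu2 * s2 * kDeltaT + sigma2 * s2 * dW2;
        }

        sumS1 += s1;
        sumS2 += s2;
    }

    /* Aggregate across paths (here, just the sample means of the terminal prices). */
    printf("Mean terminal prices: %f, %f\n",
           sumS1 / kNumberOfPaths, sumS2 / kNumberOfPaths);

    return 0;
}
```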

When running on the Signaloid Compute Engine, the kernels implementing the CBM SDE can replace their use of individual samples drawn from a distribution for each path with a direct computation on a representation of the probability distribution across paths. This allows the code kernel running on the Signaloid Compute Engine to compute, in a single pass over the time steps, the same kind of distribution that the Monte Carlo simulation computes across thousands or hundreds of thousands of paths.
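
The sketch below illustrates, under stated assumptions, what that replacement can look like in C: each standard-normal Brownian increment is set to a Gaussian distribution via the Signaloid UxHw API, so a single pass over the 252 time steps propagates the full terminal-price distribution. The specific call, UxHwDoubleGaussDist() from uxhw.h, is taken from Signaloid's published UxHw C API and is an assumption of this sketch rather than something specified here; the model parameters are the same placeholders as in the Monte Carlo sketch above.

```c
/*
 * Hedged sketch of the distribution-based variant on the Signaloid Compute
 * Engine: each Gaussian Brownian increment is a probability distribution
 * (set via UxHwDoubleGaussDist() from the UxHw C API), so one pass over the
 * time steps propagates the distribution that the Monte Carlo version
 * computes across its many paths. Parameters are the same placeholders as
 * in the Monte Carlo sketch above.
 */
#include <math.h>
#include <stdio.h>
#include <uxhw.h>

int
main(void)
{
    const int    kNumberOfSteps = 252;       /* stock-market trading days in a year */
    const double kDeltaT = 1.0 / kNumberOfSteps;
    const double mu1 = 0.05, sigma1 = 0.20;  /* placeholder drift and volatility, asset 1 */
    const double mu2 = 0.03, sigma2 = 0.15;  /* placeholder drift and volatility, asset 2 */
    const double rho = 0.6;                  /* placeholder correlation */
    double       s1 = 100.0, s2 = 100.0;     /* placeholder initial prices */

    for (int step = 0; step < kNumberOfSteps; step++)
    {
        /*
         * Standard-normal increments as distributions rather than samples.
         */
        double z1  = UxHwDoubleGaussDist(0.0, 1.0);
        double z2  = UxHwDoubleGaussDist(0.0, 1.0);
        double dW1 = sqrt(kDeltaT) * z1;
        double dW2 = sqrt(kDeltaT) * (rho * z1 + sqrt(1.0 - rho * rho) * z2);

        s1 += mu1 * s1 * kDeltaT + sigma1 * s1 * dW1;
        s2 += mu2 * s2 * kDeltaT + sigma2 * s2 * dW2;
    }

    /*
     * On a Signaloid core, s1 and s2 now carry representations of the
     * terminal-price distributions rather than single samples.
     */
    printf("Terminal prices: %lf, %lf\n", s1, s2);

    return 0;
}
```

On a conventional processor the same program computes with point values only; the distributional behavior described above applies when it executes on a Signaloid core.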

For the CBM SDE, an implementation on the Signaloid C0 processor runs 1.9x faster than an already fast C-language Monte-Carlo-based implementation of the same model running on an AWS r7iz high-performance instance.

Key Performance Indicator

Speed for the same uncertainty quantification accuracy.

Signaloid Platform Solution

Run existing non-Monte-Carlo code and use either the Signaloid Compute Engine's automated ingestion of distribution information or the Signaloid UxHw API to set program variables as probability distributions.

Competing Solution

Run existing Monte Carlo code, or, starting from non-Monte-Carlo code, modify the code to implement Monte Carlo sampling, iteration, and aggregation of the results from the Monte Carlo iterations of the computation.

Signaloid Benefit

1.9x faster execution time than 54k-iteration Monte Carlo, while achieving the same fidelity of full-distribution result.

Plot (see Note 1 below for details) of output distribution when running on Signaloid C0Pro-M core that provides the 1.9x speedup.

Plot (see Note 1 below for details) of the output of a 54k-iteration Monte Carlo for this use case. This Monte Carlo iteration count provides the same or better Wasserstein distance to ground truth (1M-iteration) Monte Carlo as execution on a Signaloid C0Pro-M core (which is 1.9x faster).

Plot (see Note 1 below for details) of ground truth (1M-iteration) Monte Carlo.

Note 1:

The underlying distribution representations are not literal histograms: the distribution plots use an adaptive algorithm to render a mutually-consistent and human-interpretable depiction of both the Signaloid distribution representations and the Monte Carlo samples, to permit qualitative comparison.

Note 2:

Because Monte Carlo works by statistical sampling, each set of multi-iteration Monte Carlo runs (e.g., each time a 200k-iteration Monte Carlo is run) results in a slightly different final distribution. By contrast, the results from Signaloid's platform are completely deterministic and yield the same distribution each time, for a given Signaloid C0 core type.

The performance results above show the speedup of running on Signaloid's platform, compared to running a Monte Carlo simulation on an AWS r7iz high-performance instance, for the same quality of distribution, while accounting for the variations inherent in Monte Carlo. To compare distribution quality, we run a large Monte Carlo until convergence (e.g., 1M iterations) and use this as a baseline or ground-truth reference for distribution quality (not for performance). We then compare the performance of the Signaloid solution against a Monte Carlo iteration count for which the output distributions of 100 out of 100 repetitions are all at a smaller Wasserstein distance to the output distribution of the baseline reference than the Signaloid-core-executed algorithm's output distribution is. Intuitively, this analysis gives the Monte Carlo iteration count that results in an output distribution that is never worse than the Signaloid-core-executed computation's output distribution.

The 1.9x speedup was achieved on a Signaloid C0Pro-M core. Performance data are based on the Spring 2024 release of Signaloid's technology.
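
For readers who want to see the distribution-quality metric of Note 2 in concrete form, the following minimal sketch computes the 1-Wasserstein (earth mover's) distance between two empirical distributions in C. It assumes the two sample sets have equal size, in which case the distance reduces to the mean absolute difference of the sorted samples; it is an illustrative sketch of the metric, not Signaloid's benchmarking harness.

```c
/*
 * Illustrative sketch of the distribution-quality metric in Note 2: the
 * 1-Wasserstein (earth mover's) distance between two empirical
 * distributions. Assumes equal sample sizes, in which case the distance is
 * the mean absolute difference of the sorted samples. Note that the inputs
 * are sorted in place.
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static int
compareDoubles(const void *a, const void *b)
{
    double x = *(const double *)a;
    double y = *(const double *)b;

    return (x > y) - (x < y);
}

double
wasserstein1(double *samplesA, double *samplesB, size_t n)
{
    double distance = 0.0;

    qsort(samplesA, n, sizeof(double), compareDoubles);
    qsort(samplesB, n, sizeof(double), compareDoubles);

    for (size_t i = 0; i < n; i++)
    {
        distance += fabs(samplesA[i] - samplesB[i]);
    }

    return distance / n;
}

int
main(void)
{
    double a[] = {1.0, 2.0, 3.0, 4.0};
    double b[] = {1.5, 2.5, 3.5, 4.5};

    printf("W1 = %f\n", wasserstein1(a, b, 4));    /* prints W1 = 0.500000 */

    return 0;
}
```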