Signaloid Cloud Compute Engine

Use Case Family:

Quantitative Finance

### Use Case

# Path-Dependent Pricing Stochastic Processes with Arithmetic Brownian Motion

Modeling the price evolution of a financial instrument under fluctuating market conditions is a critical task for many financial institutions. This use case implements a numerical solution of the stochastic differential equation (SDE) for an arithmetic Brownian motion (ABM) process. The ABM SDE is typically solved numerically using a Monte Carlo simulation over a set of paths and across a number of time steps. The number of paths typically ranges from thousands to hundreds of thousands, and the number of steps is typically the number of stock market trading days in a year (252), or, for model scenarios that permit certain simplifications, a single time step.

When running on the Signaloid Compute Engine, the kernels implementing the ABM SDE can replace per-path sampling from a distribution with direct computation on a representation of the probability distribution across paths. This allows the code kernel running on the Signaloid Compute Engine to compute, in a single pass over the time steps, the same kind of distribution that the Monte Carlo simulation computes across thousands or hundreds of thousands of paths.
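The single-pass idea can be illustrated in plain C for the ABM case, where the per-step increments are independent Gaussians, so the price distribution remains Gaussian and can be propagated by updating just its mean and variance at each time step. This closed-form propagation is a sketch of the concept only: the Signaloid platform computes on general distribution representations in the processor, not on Gaussian parameters as done here.

```c
/*
 *	Single-pass propagation of the ABM price distribution.
 *	For dS = mu dt + sigma dW, each time step adds an independent
 *	N(mu dt, sigma^2 dt) increment, so means and variances add.
 *
 *	Illustrative sketch only; not the Signaloid distribution representation.
 */
typedef struct
{
	double	mean;
	double	variance;
} GaussianDist;

GaussianDist
propagateAbm(double s0, double mu, double sigma, double dt, int steps)
{
	GaussianDist	d = { s0, 0.0 };

	for (int i = 0; i < steps; i++)
	{
		d.mean     += mu * dt;
		d.variance += sigma * sigma * dt;
	}

	return d;
}
```

A single loop over the 252 time steps replaces the outer loop over thousands of Monte Carlo paths, which is the source of the speedup discussed below.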

For the ABM SDE, an implementation on the Signaloid C0 processor runs 2.8x faster than an already fast C-language Monte-Carlo-based implementation of the same model running on an AWS r7iz high-performance instance.

### Note 1:

The underlying distribution representations are not literal histograms: the distribution plots use an adaptive algorithm to render a mutually consistent, human-interpretable depiction of both the Signaloid distribution representations and the Monte Carlo samples, permitting qualitative comparison.

### Note 2:

Because Monte Carlo works by statistical sampling, each multi-iteration Monte Carlo run (e.g., each time a 200k-iteration Monte Carlo is run) produces a slightly different final distribution. By contrast, the results from Signaloid's platform are completely deterministic and yield the same distribution each time, for a given Signaloid C0 core type.

The performance improvement over Monte Carlo reported above shows the speedup of running on Signaloid's platform, compared to running a Monte Carlo simulation on an AWS r7iz high-performance instance, for the same quality of distribution, while accounting for the variation inherent in Monte Carlo. To compare distribution quality, we run a large Monte Carlo simulation until convergence (e.g., 1M iterations) and use it as a baseline (ground truth) reference for distribution quality (not for performance). We then compare the performance of the Signaloid solution against the Monte Carlo iteration count for which the output distributions of 100 out of 100 repetitions all lie at a smaller Wasserstein distance to the baseline reference's output distribution than the Signaloid-core-executed algorithm's output distribution does. Intuitively, this analysis gives the Monte Carlo iteration count whose output distribution is never worse than that of the Signaloid-core-executed computation.

The 2.8x speedup was achieved on a Signaloid C0Pro-L core. Performance data are based on the Spring 2024 release of Signaloid's technology.
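For one-dimensional empirical distributions of equal sample size, the first Wasserstein distance used in the comparison above reduces to the mean absolute difference of the sorted order statistics of the two sample sets. The sketch below illustrates this reduction; the actual evaluation methodology may use a different estimator.

```c
#include <math.h>
#include <stdlib.h>

/*
 *	Comparison function for qsort over doubles.
 */
static int
cmpDouble(const void *a, const void *b)
{
	double	x = *(const double *)a;
	double	y = *(const double *)b;

	return (x > y) - (x < y);
}

/*
 *	First Wasserstein distance between two equal-size 1-D empirical
 *	samples: sort both in place, then average the absolute differences
 *	of matching order statistics.
 */
double
wasserstein1(double *a, double *b, int n)
{
	double	sum = 0.0;

	qsort(a, n, sizeof(double), cmpDouble);
	qsort(b, n, sizeof(double), cmpDouble);

	for (int i = 0; i < n; i++)
	{
		sum += fabs(a[i] - b[i]);
	}

	return sum / n;
}
```

For example, the samples {1, 2, 3} and {2, 3, 4} are at Wasserstein distance 1, since each sorted element of the first set is exactly one unit away from its counterpart in the second.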