
Benchmarking
Using the Heath-Jarrow-Morton (HJM) Framework for Pricing a Portfolio of Swaptions
This use case applies the Heath-Jarrow-Morton (HJM) framework to price a basket of swaptions. Traditional implementations rely on Monte Carlo simulation, typically with thousands to hundreds of thousands of iterations.
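To make the traditional approach concrete, the following is a minimal, self-contained C sketch of such a Monte Carlo pricer for a single European payer swaption. The lognormal swap-rate dynamics, the parameter values, and the Box-Muller normal generator are simplifying assumptions for illustration; they are not the HJM forward-curve discretization used in the actual benchmark.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/*
 *	Draw a standard-normal variate via the Box-Muller transform.
 */
static double
standardNormal(void)
{
    const double kTwoPi = 6.28318530717958647692;
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);

    return sqrt(-2.0 * log(u1)) * cos(kTwoPi * u2);
}

int
main(void)
{
    const long   kNumberOfPaths   = 100000;  /* typical Monte Carlo iteration count */
    const double kForwardSwapRate = 0.03;    /* assumed initial forward swap rate */
    const double kStrike          = 0.03;
    const double kVolatility      = 0.20;    /* assumed lognormal volatility */
    const double kExpiry          = 1.0;     /* years to option expiry */
    const double kAnnuity         = 4.5;     /* assumed (discounted) fixed-leg annuity */
    double       payoffSum        = 0.0;

    srand(12345);

    for (long path = 0; path < kNumberOfPaths; path++)
    {
        /* One random variate per path drives the rate evolution. */
        double z = standardNormal();
        double rateAtExpiry = kForwardSwapRate *
            exp(-0.5 * kVolatility * kVolatility * kExpiry +
                kVolatility * sqrt(kExpiry) * z);

        payoffSum += fmax(rateAtExpiry - kStrike, 0.0);
    }

    printf("Swaption value ~= %.6f\n", kAnnuity * payoffSum / kNumberOfPaths);

    return 0;
}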
When running on the Signaloid Compute Engine, we can replace individual random variates drawn from some distribution with a direct computation on a representation of the probability distribution across paths. This allows the kernel to compute, in a single pass over the time steps, the same kind of output distribution that the Monte Carlo simulation obtains across thousands or hundreds of thousands of paths.
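The sketch below shows how the same toy pricer could look when the per-path random draws are replaced by a single variable carrying a whole distribution. The header name uxhw.h and the function UxHwDoubleGaussDist are my assumption of Signaloid's uncertainty-tracking C API, and the dynamics and parameters remain the illustrative ones from the Monte Carlo sketch above.

#include <math.h>
#include <stdio.h>
#include <uxhw.h>  /* assumed Signaloid uncertainty-tracking API header */

int
main(void)
{
    const double kForwardSwapRate = 0.03;  /* same illustrative parameters as above */
    const double kStrike          = 0.03;
    const double kVolatility      = 0.20;
    const double kExpiry          = 1.0;
    const double kAnnuity         = 4.5;

    /*
     * One variable carrying a full standard-normal distribution replaces
     * the per-path random draws; subsequent arithmetic propagates the
     * distribution in a single pass rather than once per path.
     */
    double z = UxHwDoubleGaussDist(0.0, 1.0);
    double rateAtExpiry = kForwardSwapRate *
        exp(-0.5 * kVolatility * kVolatility * kExpiry +
            kVolatility * sqrt(kExpiry) * z);
    double value = kAnnuity * fmax(rateAtExpiry - kStrike, 0.0);

    /* Printing a distributional value reports its distribution on the platform. */
    printf("Swaption value: %lf\n", value);

    return 0;
}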
A C benchmark implementation that uses the HJM framework to price a basket of swaptions, running on the Signaloid Cloud Platform with a single-threaded Signaloid C0Pro-M+ core selected, runs 583x faster than an already fast C-language Monte-Carlo-based implementation of the same model running on an Amazon AWS r7iz high-performance server instance.
The underlying distribution representations are not literal histograms: the distribution plots use an adaptive algorithm to render a mutually consistent, human-interpretable depiction of both the Signaloid distribution representations and the Monte Carlo samples, to permit qualitative comparison.
Plot of the output distribution when running on the Signaloid C0Pro-M+ core that provides the 583x speedup.
Plot of the output of a 22.2M-iteration Monte Carlo for this use case. This Monte Carlo iteration count provides the same or better Wasserstein distance to the ground truth (large, converged) Monte Carlo as execution on a Signaloid C0Pro-M+ core (which is 583x faster).
Plot of ground truth (20M-iteration) Monte Carlo.
Benchmarking Methodology
Monte Carlo simulations work by statistical sampling, and therefore each multi-iteration Monte Carlo run results in a slightly different output distribution. By contrast, Signaloid's platform is deterministic: each run produces the same output distribution for a given Signaloid C0 core type.
The performance improvements are calculated by comparing Signaloid's platform with a Monte Carlo simulation of similar distribution quality. First, we run a large Monte Carlo simulation (about 50M iterations) on a high-performance AWS r7iz instance and use its output distribution as the baseline, or ground-truth, reference of distribution quality. Then we measure the performance of Signaloid's technology and compare it with the performance of a Monte Carlo run whose iteration count yields an output distribution whose Wasserstein distance to the ground-truth reference is, at a 95% confidence level, no larger than that of the output distribution computed on the Signaloid core.
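For concreteness, the following is a minimal C sketch of the 1-Wasserstein distance between two equal-size empirical samples, computed by matching sorted order statistics. The equal-sample-size restriction and the example values are simplifying assumptions, and the confidence-interval procedure used in the benchmark is not shown.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/*
 *	Comparator for qsort over doubles.
 */
static int
compareDoubles(const void *a, const void *b)
{
    double x = *(const double *)a;
    double y = *(const double *)b;

    return (x > y) - (x < y);
}

/*
 *	1-Wasserstein distance between two empirical distributions with the
 *	same number of samples: sort both and average the absolute differences
 *	of matched order statistics (i.e., matched empirical quantiles).
 */
static double
empiricalWasserstein1(double *samplesA, double *samplesB, size_t count)
{
    double sum = 0.0;

    qsort(samplesA, count, sizeof(double), compareDoubles);
    qsort(samplesB, count, sizeof(double), compareDoubles);

    for (size_t i = 0; i < count; i++)
    {
        sum += fabs(samplesA[i] - samplesB[i]);
    }

    return sum / (double)count;
}

int
main(void)
{
    /* Example values only, standing in for reference and candidate output samples. */
    double referenceSamples[] = {0.9, 1.0, 1.1, 1.2, 1.3};
    double candidateSamples[] = {0.8, 1.0, 1.2, 1.3, 1.4};

    printf("W1 = %.4f\n", empiricalWasserstein1(referenceSamples, candidateSamples, 5));

    return 0;
}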
Performance data are based on the Fall 2025 release of Signaloid's technology.


