I am comparing the speed of a Monte Carlo pricing algorithm for a vanilla call option between Matlab and C++. This is not a duplicate of "Why is MATLAB so fast in matrix multiplication?", since the speed-up here is not due to matrix multiplication (there is only a dot product, which is quick) but appears to come from Matlab's highly efficient Gaussian random number generator.
In Matlab the code has been vectorised and looks as follows:
function [ value ] = OptionMCValue( yearsToExpiry, spot, strike, riskFreeRate, dividendYield, volatility, numPaths )
    % Standard deviation of log-returns over the option's life
    sd = volatility*sqrt(yearsToExpiry);
    % Spot adjusted for the risk-neutral drift
    sAdjusted = spot * exp( (riskFreeRate - dividendYield - 0.5*volatility*volatility) * yearsToExpiry );
    % One standard normal draw per path
    g = randn(1,numPaths);
    % Terminal spot prices, payoffs, and discounted mean
    sT = sAdjusted * exp( g * sd );
    values = max(sT-strike,0);
    value = mean(values);
    value = value * exp(-riskFreeRate * yearsToExpiry);
end
If I run this with 10 million paths as follows
strike = 100.0;
yearsToExpiry = 2.16563;
spot = 100.0;
volatility = 0.20;
dividendYield = 0.03;
riskFreeRate = 0.05;
oneMillion = 1000000;
numPaths = 10*oneMillion;
tic
value = OptionMCValue( yearsToExpiry, spot, strike, riskFreeRate, dividendYield, volatility, numPaths );
toc
I get
Elapsed time is 0.359304 seconds.
12.8311
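As a sanity check on this number, the closed-form Black-Scholes price for the same inputs should agree with the MC estimate to within Monte Carlo error. A minimal sketch of that check (the function names here are just for illustration, not from my code):

#include <cmath>
#include <cstdio>

// Standard normal CDF via the complementary error function
double normCdf(double x)
{
    return 0.5 * std::erfc(-x / std::sqrt(2.0));
}

// Closed-form Black-Scholes price of a European call with a continuous dividend yield
double blackScholesCall(double yearsToExpiry, double spot, double strike,
                        double riskFreeRate, double dividendYield, double volatility)
{
    double sd = volatility * std::sqrt(yearsToExpiry);
    double d1 = (std::log(spot / strike)
                 + (riskFreeRate - dividendYield + 0.5 * volatility * volatility) * yearsToExpiry) / sd;
    double d2 = d1 - sd;
    return spot * std::exp(-dividendYield * yearsToExpiry) * normCdf(d1)
         - strike * std::exp(-riskFreeRate * yearsToExpiry) * normCdf(d2);
}

int main()
{
    // Same inputs as the Matlab run above
    std::printf("%.4f\n", blackScholesCall(2.16563, 100.0, 100.0, 0.05, 0.03, 0.20));
    return 0;
}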
Now I do the same thing in C++ in VS2013. My code is in an OptionMC class (the strike is held as the member variable m_strike) and is as follows:
double OptionMC::value(double yearsToExpiry,
                       double spot,
                       double riskFreeRate,
                       double dividendYield,
                       double volatility,
                       unsigned long numPaths)
{
    double sd = volatility * sqrt(yearsToExpiry);
    double sAdjusted = spot * exp((riskFreeRate - dividendYield - 0.5 * volatility * volatility) * yearsToExpiry);
    double value = 0.0;
    double g, sT;
    for (unsigned long i = 0; i < numPaths; i++)
    {
        g = GaussianRVByBoxMuller();        // one standard normal draw per path
        sT = sAdjusted * exp(g * sd);       // terminal spot
        value += Max(sT - m_strike, 0.0);   // accumulate the payoff
    }
    value = value * exp(-riskFreeRate * yearsToExpiry);
    value /= (double) numPaths;
    return value;
}
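The timing is just a clock around a single call; a sketch of the kind of harness used (the OptionMC constructor taking the strike is an assumption here, since only m_strike appears in the class):

#include <chrono>
#include <cstdio>
#include "OptionMC.h"   // hypothetical header for the class above

int main()
{
    OptionMC option(100.0);   // strike; constructor signature assumed

    auto start = std::chrono::steady_clock::now();
    double value = option.value(2.16563, 100.0, 0.05, 0.03, 0.20, 10000000UL);
    auto end = std::chrono::steady_clock::now();

    std::chrono::duration<double> elapsed = end - start;
    std::printf("value = %.4f, elapsed = %.3f s\n", value, elapsed.count());
    return 0;
}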
The Box-Muller (BM) code is as follows:
double GaussianRVByBoxMuller()
{
    double result;
    double x;
    double y;
    double w;
    do
    {
        // Uniform draws on (-1, 1); reject points outside the unit circle
        x = 2.0 * rand() / static_cast<double>(RAND_MAX) - 1;
        y = 2.0 * rand() / static_cast<double>(RAND_MAX) - 1;
        w = x * x + y * y;
    } while (w >= 1.0);
    w = sqrt(-2.0 * log(w) / w);
    result = x * w;
    return result;
}
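As an aside, each pass of this (polar) Box-Muller loop actually produces two independent normals, x*w and y*w, but only one is returned. A sketch of a variant that caches the spare deviate (static state, so not thread-safe):

#include <cmath>    // sqrt, log
#include <cstdlib>  // rand, RAND_MAX

double GaussianRVByBoxMullerCached()
{
    // Spare deviate left over from the previous call, if any
    static bool haveSpare = false;
    static double spare = 0.0;

    if (haveSpare)
    {
        haveSpare = false;
        return spare;
    }

    double x, y, w;
    do
    {
        x = 2.0 * rand() / static_cast<double>(RAND_MAX) - 1;
        y = 2.0 * rand() / static_cast<double>(RAND_MAX) - 1;
        w = x * x + y * y;
    } while (w >= 1.0);

    w = sqrt(-2.0 * log(w) / w);
    spare = y * w;          // keep the second deviate for the next call
    haveSpare = true;
    return x * w;
}

This roughly halves the number of rand(), log and sqrt calls per returned deviate.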
I have set the Optimization option to Optimize for speed in Visual Studio.
For 10m paths it takes 4.124 seconds.
This is 11 times slower than Matlab.
Can anyone explain the difference?
EDIT: On further testing, the slowdown does seem to be the call to GaussianRVByBoxMuller. Matlab appears to have a very efficient implementation of randn (the Ziggurat method). Note that Box-Muller is sub-optimal here, as each call generates two random variates and I only use one; fixing just this would give roughly a factor-of-2 speed-up.
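For comparison, here is a sketch of the same loop using the C++11 <random> facilities instead of rand() plus hand-rolled Box-Muller. Whether the library's normal_distribution uses the Ziggurat method is implementation-dependent, but std::mt19937_64 is typically much faster and statistically better than rand(). The free-function form taking the strike explicitly is my own, not the OptionMC member:

#include <algorithm>  // std::max
#include <cmath>
#include <random>

double optionMCValueStdRandom(double yearsToExpiry, double spot, double strike,
                              double riskFreeRate, double dividendYield,
                              double volatility, unsigned long numPaths)
{
    double sd = volatility * std::sqrt(yearsToExpiry);
    double sAdjusted = spot * std::exp((riskFreeRate - dividendYield
                                        - 0.5 * volatility * volatility) * yearsToExpiry);

    std::mt19937_64 gen(42);                          // fixed seed; seed as you see fit
    std::normal_distribution<double> norm(0.0, 1.0);  // standard normal draws

    double value = 0.0;
    for (unsigned long i = 0; i < numPaths; i++)
    {
        double sT = sAdjusted * std::exp(norm(gen) * sd);
        value += std::max(sT - strike, 0.0);
    }
    return value * std::exp(-riskFreeRate * yearsToExpiry) / static_cast<double>(numPaths);
}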