I have been tasked with translating the simulations in the Excel plug-in @Risk to Python. Its functionality lines up closely with numpy's random sampling routines, which take a distribution type plus parameters such as mu and sigma, or low and high values. An example of what I am doing is here.
In the linked example, mu=2 and sigma=1. Using numpy, I get the same distribution as @Risk:
import numpy as np
dist = np.random.lognormal(2, 1, 1000)  # mu=2, sigma=1, 1000 samples
However, when I use numpy with the following parameters, I can no longer replicate the @Risk distribution.
mu=0.4, sigma=0.16 in @Risk: histogram for 1000 samples
and in Python: histogram for 1000 samples
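To be concrete, this is the exact call behind the Python histogram above (the seed is my addition, purely so the run is reproducible):

import numpy as np
np.random.seed(0)  # hypothetical seed, just for reproducibility
dist = np.random.lognormal(0.4, 0.16, 1000)  # same mu and sigma I entered in @Risk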
The result is two completely different distributions for the same mu and sigma, so I am now confused about what numpy expects for the mu and sigma inputs. I've read through the docs linked here, but why would one set of parameters give me matching distributions while another set does not?
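To quantify "completely different": I expected the numpy samples to land near the values I entered, but a quick check of the sample moments shows otherwise (the expected values in the comments are my own back-of-envelope numbers, not @Risk output):

import numpy as np

dist = np.random.lognormal(0.4, 0.16, 1000)
print(dist.mean())  # roughly 1.51, not the 0.4 I entered
print(dist.std())   # roughly 0.24, not the 0.16 I entered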
What am I missing here?