I am trying to simulate the performance of a real-life process. The variables that have been measured historically show a fixed interval, so values lower or greater than those bounds are physically impossible.
To simulate the process output, each input variable's historical data was represented by its best-fit probability distribution (using this approach: Fitting empirical distribution to theoretical ones with Scipy (Python)?).
However, when the resulting theoretical distributions are simulated n times, the simulated values do not respect the real-life expected minimum and maximum values. I am considering a try-except check on each simulated value to verify that it falls within the expected interval, but I am not sure this is the best way to handle it, since discarding or clipping values means the experimental mean and variance are no longer reproduced.
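For reference, here is a minimal sketch of the two usual options: rejection sampling (redraw until the value is in range) and using scipy's truncated counterpart of the fitted distribution directly. The bounds `low`/`high`, the synthetic `data`, and the assumption that the best fit was a normal distribution are all illustrative, not from my actual process. Note that either approach changes the moments of the sampled values relative to the untruncated fit, which is exactly the mean/variance concern above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical physical bounds and stand-in historical data
low, high = 0.0, 10.0
data = rng.normal(5.0, 2.0, size=1000).clip(low, high)

# Stand-in for the "best fit" step: fit a normal distribution
mu, sigma = stats.norm.fit(data)

# Option 1: rejection sampling -- draw from the fitted distribution
# and keep only values inside the physical interval
def sample_rejection(n):
    out = np.empty(0)
    while out.size < n:
        draws = stats.norm.rvs(mu, sigma, size=n, random_state=rng)
        out = np.concatenate([out, draws[(draws >= low) & (draws <= high)]])
    return out[:n]

# Option 2: sample the truncated distribution directly
# (truncnorm takes bounds in standardized units)
a, b = (low - mu) / sigma, (high - mu) / sigma
trunc = stats.truncnorm(a, b, loc=mu, scale=sigma)

samples_rej = sample_rejection(10_000)
samples_trunc = trunc.rvs(size=10_000, random_state=rng)
```

Both options guarantee the bounds, but the truncated distribution's mean and variance differ from `mu` and `sigma`; if matching the experimental moments matters, one would have to fit the truncated distribution's parameters to the data directly rather than truncate after fitting.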