I have a Bayesian network that I call several times to predict a value and estimate the epistemic uncertainty. I've seen here that it is better to use model(X) than model.predict(X), because the first option is supposed to be faster. In my case it actually seems to be the opposite... For 20 Monte Carlo iterations, here is the behaviour:
- With model(X):
-> leads to an OOM: failed to allocate memory [Op:AddV2]
-> time for the prediction and the other work: ~4-5 s
- With model.predict(X):
-> no OOM
-> time for the prediction and the other work: ~3-4 s
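For reference, here is a minimal, self-contained sketch of the two call paths I'm comparing. The toy model is only a stand-in for one of my networks (their output layer is a TensorFlow Probability distribution, which is what the .mean().numpy() in my code below refers to); the layer sizes and input shape are made up for the example:

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Toy stand-in for one of the real networks: a small dense net whose
# output layer is a Normal distribution.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2),
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1], scale=tf.nn.softplus(t[..., 1:]))
    ),
])

data_scaled = np.random.rand(1, 4).astype("float32")  # a single scaled sample

# Variant 1 -- model(X): eager __call__, returns the distribution object
pred_call = model(data_scaled).mean().numpy()[0][0]

# Variant 2 -- model.predict(X): Keras inference loop, returns a numpy array
# (with a distribution output, predict() first converts it to a tensor,
# by default by sampling)
pred_predict = model.predict(data_scaled)[0]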
Here is the relevant part of the code I'm using:
...
mte_carlo_preds = np.array([])
for _ in range(n_mte_carlo):
    # model(X) variant (the one that ends up OOMing):
    # mte_carlo_preds = np.append(mte_carlo_preds, [self.dict_models[kpi](data_scaled).mean().numpy()[0][0]])
    mte_carlo_preds = np.append(mte_carlo_preds, [self.dict_models[kpi].predict(data_scaled)[0]])
# mean/std over the MC samples give the prediction and the epistemic uncertainty
mu_prediction, std_prediction = mte_carlo_preds.mean(), mte_carlo_preds.std()
lower_band = mu_prediction - std_prediction
upper_band = mu_prediction + std_prediction
...
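In case it matters, I also wondered whether batching the Monte Carlo passes into a single predict() call would sidestep the loop entirely. A sketch of what I mean (this assumes the stochastic layers draw new weights per example, as Flipout layers do; with a plain DenseVariational layer one weight sample is shared across the whole batch, so the outputs would not be independent Monte Carlo draws):

# repeat the single sample n_mte_carlo times along the batch axis
batched = np.repeat(data_scaled, n_mte_carlo, axis=0)
mte_carlo_preds = self.dict_models[kpi].predict(batched)[:, 0]
mu_prediction, std_prediction = mte_carlo_preds.mean(), mte_carlo_preds.std()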
The prediction is done on a single sample... Do you have any idea where this behaviour could come from?
Thanks a lot,