
I have a Bayesian network that I call several times to predict values and estimate epistemic uncertainty. I've seen here that it is better to use model(X) than model.predict(X), since the first option is faster. In my case, it actually seems to be the opposite. For 20 Monte Carlo iterations, here is the behaviour (a small sketch of the two call styles follows the list):

  • With model(X)

-> leads to an OOM error: failed to allocate memory [Op:AddV2] during the TensorFlow prediction

-> time to perform the prediction and other work: ~4s-5s

  • With model.predict(X)

-> no OOM

-> time to perform the prediction and other work: ~3s-4s
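For context, here is a minimal sketch of what the two call styles return (a hypothetical plain Keras model stands in for my Bayesian one; only the input shape (1, 21) matches my case). model(X) executes eagerly and returns a tensor kept on device, while model.predict(X) runs a compiled batch loop and materialises a NumPy array:

    import numpy as np
    import tensorflow as tf

    # Hypothetical stand-in: a plain Keras model with the same input shape
    # as in my case; my real model is built with tensorflow_probability.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(21,))])
    X = np.random.rand(1, 21).astype(np.float32)

    y_call = model(X)          # __call__: eager execution, returns a tf.Tensor
    y_pred = model.predict(X)  # predict(): compiled batch loop, returns np.ndarray
    print(type(y_call), type(y_pred))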

Here is a part of the code I'm using:

    ...
    # Collect one prediction per Monte Carlo pass (n_mte_carlo = 20 here).
    mte_carlo_preds = np.array([])
    for _ in range(n_mte_carlo):
        # model(X) variant: returns a distribution, hence .mean().numpy()
        # mte_carlo_preds = np.append(mte_carlo_preds, [self.dict_models[kpi](data_scaled).mean().numpy()[0][0]])
        # model.predict(X) variant: returns a NumPy array directly
        mte_carlo_preds = np.append(mte_carlo_preds, [self.dict_models[kpi].predict(data_scaled)[0]])

    # Epistemic uncertainty: mean +/- one standard deviation over the MC samples
    mu_prediction, std_prediction = mte_carlo_preds.mean(), mte_carlo_preds.std()
    lower_band = mu_prediction - std_prediction
    upper_band = mu_prediction + std_prediction
    ...
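One variant I'm considering (a sketch only, assuming the model outputs a tensorflow_probability distribution, as in the commented-out line): preallocate the array instead of calling np.append, and convert each result to NumPy inside the loop so no eager tensor outlives its iteration:

    # Sketch: same loop, preallocated and converted to NumPy immediately.
    # self.dict_models[kpi] and data_scaled are as above.
    model = self.dict_models[kpi]
    mte_carlo_preds = np.empty(n_mte_carlo, dtype=np.float32)
    for i in range(n_mte_carlo):
        out = model(data_scaled)                       # eager forward pass
        mte_carlo_preds[i] = out.mean().numpy()[0, 0]  # keep the scalar, drop the tensor

    mu_prediction, std_prediction = mte_carlo_preds.mean(), mte_carlo_preds.std()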

The prediction is done on a single sample... Do you have any idea where this behaviour could come from?

Thanks a lot,

  • can you share data_scaled's shape and dtype? – Rodrigo Laguna Aug 09 '22 at 02:54
  • Check out [this question](https://stackoverflow.com/questions/60837962/confusion-about-keras-model-call-vs-call-vs-predict-methods) – Rodrigo Laguna Aug 09 '22 at 03:05
  • Hello! Thanks for your answer. I've already seen this thread, and most of the comments seem to concur, but in my case it is not really what I observe. The shape is (1, 21) and the dtype is np.float32. I use tensorflow 2.8.0 and tensorflow_probability 0.17.0 – user19296578 Aug 10 '22 at 09:55

0 Answers