
Hello, I am trying to train a small model using tf.keras with TF 2.2.0. I'm using a generator which returns sequences of shape [5, 120, 32, 64, 9] and labels of shape [5, 120, 1], and I'm importing the metrics from tf.keras:

from tensorflow.keras.metrics import Recall, Precision, Metric
from tensorflow.keras.optimizers import Adam

Additionally, I add them when compiling the model and then call fit:

model.compile(
    loss="mse",
    optimizer=Adam(learning_rate=self.learning_rate),
    metrics=[Recall(), Precision()],
    # per-timestep sample weighting for the [batch, time, 1] labels
    sample_weight_mode="temporal",
)

if callbacks is None:
    callbacks = []

model.fit(
    data.training(),
    callbacks=callbacks,
    steps_per_epoch=epoch_size,
    epochs=epochs,
    validation_data=data.training(),
    validation_steps=validation_size,
    verbose=0,
)

(I'm aware that I'm using the training set as both training data and validation data. I'm doing that deliberately while trying to track down a bug in my code or in TF, since we get strange, strong swings in recall and precision on validation. It never converges and produces extreme jumps, for example 0 - 0.8 - 0.2 - 0.9 - 0.4 - 0.8 ...)

Additionally, I'm using a generator which yields tuples of inputs and outputs, since that "corrected the problem". For reference, a minimal sketch of the kind of generator I mean is below.
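The shapes match the ones described above; the random data and the function name are just placeholders standing in for data.training():

import numpy as np

def training_generator():
    # Stand-in for data.training(): yields (inputs, labels) tuples forever.
    while True:
        # 5 sequences, 120 timesteps, 32x64 frames with 9 channels
        x = np.random.rand(5, 120, 32, 64, 9).astype("float32")
        # one binary label per timestep
        y = np.random.randint(0, 2, size=(5, 120, 1)).astype("float32")
        yield x, y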

However, I'm still getting precision and recall of 0.0000:

100/100 [==============================] - 224s 2s/step - loss: 0.0371 - recall: 0.0000e+00 - precision: 0.0000e+00 - val_loss: 0.0331 - val_recall: 0.0000e+00 - val_precision: 0.0000e+00

Does anyone know any other trick in TF 2.2 that I can use to solve this problem?

A summary of my NN is the following:

Layer (type)                 Output Shape              Param #   
=================================================================
input (InputLayer)           [(None, None, 32, 64, 9)] 0         
_________________________________________________________________
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 30, 62, 20)  20960     
_________________________________________________________________
time_distributed_MP_1 (TimeD (None, None, 15, 31, 20)  0         
_________________________________________________________________
time_distributed_BN_1 (TimeD (None, None, 15, 31, 20)  80        
_________________________________________________________________
time_distributed_F (TimeDist (None, None, 9300)        0         
_________________________________________________________________
time_distributed_D1 (TimeDis (None, None, 32)          297632    
_________________________________________________________________
time_distributed (TimeDistri (None, None, 32)          0         
_________________________________________________________________
time_distributed_D2 (TimeDis (None, None, 24)          792       
_________________________________________________________________
time_distributed_1 (TimeDist (None, None, 24)          0         
_________________________________________________________________
time_distributed_D3 (TimeDis (None, None, 16)          400       
_________________________________________________________________
time_distributed_2 (TimeDist (None, None, 16)          0         
_________________________________________________________________
output (TimeDistributed)     (None, None, 1)           17        
=================================================================
  • Any chance you were able to solve this? I'm facing a similar issue with TF 2.2 where the precision/recall are 0.0000 for validation... – user3741951 Nov 04 '20 at 14:23

1 Answer


This was happening to me, and I finally figured out why: my data was ordered by class. For example, all my negative samples were at the end of the array and all my positive samples at the beginning, so when the network started training, each batch it saw contained samples of only one class.
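The fix is to shuffle the samples before training so every batch mixes both classes. A minimal sketch (the arrays here are placeholders for your own data):

import numpy as np

# Placeholder ordered data: all positives first, then all negatives.
x_train = np.random.rand(100, 10).astype("float32")
y_train = np.concatenate([np.ones(50), np.zeros(50)]).astype("float32")

# Apply the same random permutation to inputs and labels,
# so the class ordering is broken up but the pairs stay aligned.
perm = np.random.permutation(len(x_train))
x_train, y_train = x_train[perm], y_train[perm]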

– Chapin