
I'm trying to implement a sentence similarity architecture based on this work, using the STS dataset. Labels are normalized similarity scores from 0 to 1, so it is assumed to be a regression problem.

My problem is that the loss goes directly to NaN starting from the first epoch. What am I doing wrong?

I have already tried updating to the latest Keras and Theano versions.

The code for my model is:

from keras.models import Sequential, Model
from keras.layers import Input, Embedding, LSTM, Dropout, Lambda, Reshape, merge
from keras.optimizers import RMSprop
from keras import backend as K

def create_lstm_nn(input_dim):
    seq = Sequential()
    # embed using the pretrained 300d embedding matrix
    seq.add(Embedding(vocab_size, emb_dim, mask_zero=True, weights=[embedding_weights]))
    # encode via LSTM
    seq.add(LSTM(128))
    seq.add(Dropout(0.3))
    return seq

lstm_nn = create_lstm_nn(input_dim)

input_a = Input(shape=(input_dim,))
input_b = Input(shape=(input_dim,))

processed_a = lstm_nn(input_a)
processed_b = lstm_nn(input_b)

cos_distance = merge([processed_a, processed_b], mode='cos', dot_axes=1)
cos_distance = Reshape((1,))(cos_distance)
distance = Lambda(lambda x: 1-x)(cos_distance)

model = Model(input=[input_a, input_b], output=distance)

# train
rms = RMSprop()
model.compile(loss='mse', optimizer=rms)
model.fit([X1, X2], y, validation_split=0.3, batch_size=128, nb_epoch=20)

I also tried using a simple Lambda instead of the Merge layer, but the result is the same.

def cosine_distance(vests):
    x, y = vests
    x = K.l2_normalize(x, axis=-1)
    y = K.l2_normalize(y, axis=-1)
    return -K.mean(x * y, axis=-1, keepdims=True)

def cos_dist_output_shape(shapes):
    shape1, shape2 = shapes
    return (shape1[0],1)

distance = Lambda(cosine_distance, output_shape=cos_dist_output_shape)([processed_a, processed_b])
  • Hi, I see you're new to StackOverflow. To help us identify the problem, and to get to the answer you need quicker, is there any additional information you could provide? What errors are you seeing, if any? What did you expect instead? – Hopeful Llama Sep 02 '16 at 09:39
  • Well, now I'm trying to figure out why my network's loss becomes **nan** while training. – lila Sep 02 '16 at 10:26
  • Maybe your learning rate is too high. Maybe there is another problem. If you are using Theano you can use [`THEANO_FLAGS='mode=NanGuardMode'`](http://deeplearning.net/software/theano/tutorial/modes.html) when starting your script to have it throw an exception where a `nan` value is detected, giving you a traceback to the location of the issue. – nemo Sep 02 '16 at 19:53

2 Answers


NaN is a common issue in deep learning regression. Since you are using a Siamese network, you can try the following:

  1. Check your data: does it need to be normalized?
  2. Try adding a Dense layer to your network as the last layer, but be careful picking the activation function, e.g. relu.
  3. Try another loss function, e.g. contrastive loss (see the sketch after this list).
  4. Lower your learning rate, e.g. to 0.0001.
  5. The cos merge mode does not carefully handle division by zero, which might be the cause of the NaN.

It is not easy to make deep learning work perfectly.
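
For points 3 and 4, a minimal sketch of what that could look like with the model from the question. Note that contrastive loss normally expects binary similar/dissimilar labels, so the 0-1 STS scores would need thresholding first, and the margin value here is only an illustrative default:

def contrastive_loss(y_true, y_pred):
    # Hadsell-style contrastive loss: pull similar pairs together and
    # push dissimilar pairs apart until they are at least `margin` away
    margin = 1.0
    return K.mean(y_true * K.square(y_pred) +
                  (1.0 - y_true) * K.square(K.maximum(margin - y_pred, 0.0)))

rms = RMSprop(lr=0.0001)  # point 4: a much smaller learning rate
model.compile(loss=contrastive_loss, optimizer=rms)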


I didn't run into the nan issue, but my loss wouldn't change. I found this information; check it out:

def cosine_distance(shapes):
    y_true, y_pred = shapes
    def l2_normalize(x, axis):
        norm = K.sqrt(K.sum(K.square(x), axis=axis, keepdims=True))
        return K.sign(x) * K.maximum(K.abs(x), K.epsilon()) / K.maximum(norm, K.epsilon())
    y_true = l2_normalize(y_true, axis=-1)
    y_pred = l2_normalize(y_pred, axis=-1)
    return K.mean(1 - K.sum((y_true * y_pred), axis=-1))
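
For comparison, here is a minimal sketch of applying the same epsilon-guarded normalization inside the Lambda-based cosine_distance from the question (the helper name safe_l2_normalize is just illustrative):

def safe_l2_normalize(x, axis=-1):
    # keep the norm away from zero so the division can never produce NaN
    norm = K.sqrt(K.sum(K.square(x), axis=axis, keepdims=True))
    return x / K.maximum(norm, K.epsilon())

def cosine_distance(vests):
    x, y = vests
    x = safe_l2_normalize(x, axis=-1)
    y = safe_l2_normalize(y, axis=-1)
    # 1 - cosine similarity per sample, shape (batch, 1)
    return 1.0 - K.sum(x * y, axis=-1, keepdims=True)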