
I am building a multiple output keras model

model1 = Model(input=ip, output=[main, aux])
model1.compile(optimizer='sgd', loss={'main': custom_loss, 'aux': 'mean_squared_error'}, metrics=['accuracy'])

model1.fit(input_data, [main_output, aux_output], nb_epoch=epochs, batch_size=batch_size, verbose=2, shuffle=True, validation_split=0.1, callbacks=[checkpointer])

My custom_loss function:

def custom_loss(y_true, y_pred):
    main_pred = y_pred[0]
    main_true = y_true[0]

    loss = K.mean(K.square(main_true - main_pred), axis=-1)
    return loss
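(Note: in a multi-output Keras model each loss function receives only its own output's `y_true`/`y_pred`, so indexing with `[0]` here slices off the first sample of the batch rather than selecting the main output. A plain per-sample MSE needs no indexing; a minimal sketch with NumPy standing in for the Keras backend:)

```python
import numpy as np

def custom_loss(y_true, y_pred):
    # Per-sample mean squared error over the last axis.
    # No indexing needed: Keras already passes this loss
    # only the tensors belonging to the 'main' output.
    return np.mean(np.square(y_true - y_pred), axis=-1)

# Batch of 2 samples, 3 features each
y_true = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
y_pred = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 9.0]])
print(custom_loss(y_true, y_pred))  # -> [0. 3.]
```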

But my network is not converging:

Epoch 1/10
Epoch 00000: val_loss improved from inf to 0.39544, saving model to ./testAE/testAE_best_weights.h5
18s - loss: 0.3896 - main_loss: 0.0449 - aux_loss: 0.3446 - main_acc: 0.0441 - val_loss: 0.3954 - val_main_loss: 0.0510 - val_aux_loss: 0.3445 - val_main_acc: 0.0402
Epoch 2/10
Epoch 00001: val_loss did not improve
18s - loss: 0.3896 - main_loss: 0.0449 - aux_loss: 0.3446 - main_acc: 0.0441 - val_loss: 0.3954 - val_main_loss: 0.0510 - val_aux_loss: 0.3445 - val_main_acc: 0.0402
Epoch 3/10
Epoch 00002: val_loss did not improve
18s - loss: 0.3896 - main_loss: 0.0449 - aux_loss: 0.3446 - main_acc: 0.0441 - val_loss: 0.3954 - val_main_loss: 0.0510 - val_aux_loss: 0.3445 - val_main_acc: 0.0402
Epoch 4/10
Epoch 00003: val_loss did not improve
18s - loss: 0.3896 - main_loss: 0.0449 - aux_loss: 0.3446 - main_acc: 0.0441 - val_loss: 0.3954 - val_main_loss: 0.0510 - val_aux_loss: 0.3445 - val_main_acc: 0.0402

I only want to train on the main output. The aux output will be used for testing.

shaaa
  • why are you using a custom loss?... It seems like you are calculating the mean squared error anyway, so why not use that? – J.Down May 27 '17 at 04:00

1 Answer


It is unclear to me from the information provided why your loss isn't improving, but I can address part of your question. I'm also confused about why you're interested in the accuracy metric while using mean squared error, but I don't know the specifics of your model.

See this question for a simple way to train on just one of your outputs (and also an explanation of how outputs/labels are passed to loss functions). You can make your model train on just one output by passing loss_weights=[1., 0.] when you compile your model. That way the loss being optimized doesn't include the auxiliary output. That would look like this:

model1.compile(optimizer='sgd', loss={'main': custom_loss, 'aux': 'mean_squared_error'},
               metrics=['accuracy'], loss_weights=[1., 0.])

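(To illustrate what loss_weights does: Keras optimizes a weighted sum of the per-output losses, so a zero weight removes that output's contribution to the gradient. A sketch of the arithmetic, not Keras's actual implementation:)

```python
def total_loss(main_loss, aux_loss, weights=(1.0, 0.0)):
    # Weighted sum of per-output losses, as combined by Keras.
    # With weights (1.0, 0.0) only the main loss drives training.
    w_main, w_aux = weights
    return w_main * main_loss + w_aux * aux_loss

# Using the per-output losses from the training log above:
print(total_loss(0.0449, 0.3446))  # -> 0.0449
```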
Since you're just computing mean squared error, it would be simpler to rewrite your code as:

model1.compile(optimizer='sgd', loss={'main': 'mse', 'aux': 'mse'},
               metrics=['accuracy'], loss_weights=[1., 0.])

Josh Ziegler