I am trying to train a single-hidden-layer Keras Sequential multi-output (two outputs) "regression" network on non-image data, and I would like each output to have a separate activation function assigned to it. Is this the correct way to set it up so that each of the two output neurons gets its own activation? This is my model construction:
n_epoch = 1000
hidden_layer1_neurons = 100
learning_rate = 0.01
batch_size = 2000 # 10000
activation_1 = 'relu' # Activation function for Hidden Layer Neurons
activation_2 = 'tanh' # Activation function for Output Layer Neuron 1
activation_3 = 'sigmoid' # Activation function for Output Layer Neuron 2
min_lr = 0.008
lr_reduce_patience = 10
earlystop_patience = 100
earlystop_min_delta = 0.0001
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import initializers
from keras.optimizers import Adam

model = Sequential()
model.add(Dense(units=hidden_layer1_neurons, input_dim=input_dim,
                kernel_initializer=initializers.RandomNormal(mean=0.0, stddev=0.05)))
model.add(Activation(activation_1))   # hidden layer activation
model.add(Dense(units=2))             # two output neurons
model.add(Activation(activation_2))   # intended for output neuron 1
model.add(Activation(activation_3))   # intended for output neuron 2

model.compile(loss='mean_squared_error',
              optimizer=Adam(lr=learning_rate), metrics=['mae'])
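For comparison, I suspect the functional API may be what is actually needed for per-output activations. This is only a sketch of what I think the equivalent two-head model would look like (the names inputs, hidden, out1, out2 and alt_model are just illustrative, and I have not verified that it behaves the same):

from keras.models import Model
from keras.layers import Input

# Sketch: one shared hidden layer, then two single-unit output heads,
# each with its own activation (tanh for output 1, sigmoid for output 2).
inputs = Input(shape=(input_dim,))
hidden = Dense(hidden_layer1_neurons, activation=activation_1,
               kernel_initializer=initializers.RandomNormal(mean=0.0, stddev=0.05))(inputs)
out1 = Dense(1, activation=activation_2, name='output_1')(hidden)
out2 = Dense(1, activation=activation_3, name='output_2')(hidden)

alt_model = Model(inputs=inputs, outputs=[out1, out2])
alt_model.compile(loss='mean_squared_error',
                  optimizer=Adam(lr=learning_rate), metrics=['mae'])

If that is the right direction, I am also unsure whether my generator would then have to yield the targets as a list of two arrays rather than a single (batch_size, 2) array.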
And this is my training cell:
hist = model.fit_generator(generator=training_generator,
                           validation_data=validation_generator,
                           epochs=n_epoch,
                           use_multiprocessing=False,
                           callbacks=[reduce, earlystop, checkpointer, csv_logger])
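In case the callbacks matter, they are built from the hyperparameters above roughly like this (the monitor arguments and file paths below are placeholders, not my exact values):

from keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint, CSVLogger

# Reduce the learning rate on a validation-loss plateau, stop early,
# checkpoint the best model, and log each epoch to CSV.
reduce = ReduceLROnPlateau(monitor='val_loss', patience=lr_reduce_patience, min_lr=min_lr)
earlystop = EarlyStopping(monitor='val_loss', min_delta=earlystop_min_delta,
                          patience=earlystop_patience)
checkpointer = ModelCheckpoint(filepath='best_model.h5', monitor='val_loss', save_best_only=True)
csv_logger = CSVLogger('training_log.csv')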