Code:
from tensorflow.keras.layers import Input, Conv1D, MaxPooling1D, Flatten, Dense, concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.metrics import categorical_accuracy
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint
from tensorflow.keras.utils import plot_model

def define_model():
    # channel 1
    inputs1 = Input(shape=(32, 1))
    conv1 = Conv1D(filters=256, kernel_size=2, activation='relu')(inputs1)
    # bat1 = BatchNormalization(momentum=0.9)(conv1)
    pool1 = MaxPooling1D(pool_size=2)(conv1)
    flat1 = Flatten()(pool1)
    # channel 2
    inputs2 = Input(shape=(32, 1))
    conv2 = Conv1D(filters=256, kernel_size=4, activation='relu')(inputs2)
    pool2 = MaxPooling1D(pool_size=2)(conv2)
    flat2 = Flatten()(pool2)
    # channel 3
    inputs3 = Input(shape=(32, 1))
    conv3 = Conv1D(filters=256, kernel_size=4, activation='relu')(inputs3)
    pool3 = MaxPooling1D(pool_size=2)(conv3)
    flat3 = Flatten()(pool3)
    # channel 4
    inputs4 = Input(shape=(32, 1))
    conv4 = Conv1D(filters=256, kernel_size=6, activation='relu')(inputs4)
    pool4 = MaxPooling1D(pool_size=2)(conv4)
    flat4 = Flatten()(pool4)
    # merge the four channels
    merged = concatenate([flat1, flat2, flat3, flat4])
    # interpretation layers
    dense1 = Dense(128, activation='relu')(merged)
    dense2 = Dense(96, activation='relu')(dense1)
    outputs = Dense(10, activation='softmax')(dense2)
    model = Model(inputs=[inputs1, inputs2, inputs3, inputs4], outputs=outputs)
    # loss/metric combination discussed below
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=[categorical_accuracy])
    plot_model(model, show_shapes=True, to_file='/content/q.png')
    return model

model_concat = define_model()

# fit model
# factor multiplies the learning rate each time val_loss plateaus
red_lr = ReduceLROnPlateau(monitor='val_loss', patience=2, verbose=2, factor=0.001, min_delta=0.01)
check = ModelCheckpoint(filepath=r'/content/drive/My Drive/Colab Notebooks/gen/concatcnn.hdf5',
                        verbose=1, save_best_only=True)
History = model_concat.fit([X_train, X_train, X_train, X_train], y_train, epochs=20, verbose=1,
                           validation_data=([X_test, X_test, X_test, X_test], y_test),
                           callbacks=[check, red_lr], batch_size=32)
model_concat.summary()
Unfortunately, I first used binary crossentropy as the loss and 'accuracy' as the metric, and I got above 90% val_accuracy.
Then I found this question: Keras binary_crossentropy vs categorical_crossentropy performance?.
After reading the first answer, I understood that with a binary crossentropy loss Keras reports the (inflated) binary accuracy, so I kept binary crossentropy as the loss and switched the metric to categorical_accuracy, as in the code above.
Even after this change, the val_accuracy is not improving; it stays around 62%. What should I do?
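For reference, my reading of that answer is that a 10-class softmax output trained on one-hot labels would normally use categorical crossentropy as the loss itself, not only as the metric. A minimal sketch of that alternative compile call (not the one I ran above):

model_concat.compile(loss='categorical_crossentropy',
                     optimizer='adam',
                     metrics=['accuracy'])  # with this loss, 'accuracy' resolves to categorical accuracy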
I also reduced the model's complexity to help it learn the data, but the accuracy is still not improving. Am I missing anything?
Dataset shapes: x_train is (800, 32), x_test is (200, 32), y_train is (800, 10), and y_test is (200, 10). Before feeding the data into the network, I applied a standard scaler to x and reshaped x_train and x_test to (800, 32, 1) and (200, 32, 1).
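For completeness, the preprocessing looks roughly like this (a sketch assuming sklearn's StandardScaler; the variable names are mine):

from sklearn.preprocessing import StandardScaler

# Fit the scaler on the training split only, then apply it to both splits.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # (800, 32)
X_test = scaler.transform(X_test)        # (200, 32)

# Add a channel axis so each sample is a length-32 sequence with one feature.
X_train = X_train.reshape(800, 32, 1)
X_test = X_test.reshape(200, 32, 1)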
Thanks