I built an image classification model in R using Keras for R. It gets about 98% accuracy, while the same model gets terrible accuracy in Python.
The Keras version is 2.1.3 in R and 2.1.5 in Python.
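In case it matters, this is how I read the version on the Python side (just a quick check):

import keras
print(keras.__version__)  # 2.1.5 on my machine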
Here is the R model code:
library(keras)

model <- keras_model_sequential()
model %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), padding = 'same',
                input_shape = c(187, 256, 3), activation = 'elu') %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_dropout(0.25) %>%
  layer_batch_normalization() %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), padding = 'same',
                activation = 'relu') %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_dropout(0.25) %>%
  layer_batch_normalization() %>%
  layer_flatten() %>%
  layer_dense(128, activation = 'relu') %>%
  layer_dropout(0.25) %>%
  layer_batch_normalization() %>%
  layer_dense(6, activation = 'softmax')

model %>% compile(
  loss = 'categorical_crossentropy',
  optimizer = 'adam',
  metrics = 'accuracy'
)
I tried to rebuild the same model in Python, with the same input data, but got totally different performance: the accuracy is less than 30%.
Since Keras for R calls Python to run Keras, the same architecture should give similar performance. I wonder if this issue is caused by the preprocessing, but here is my Python code anyway (a sketch of my preprocessing follows it):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, BatchNormalization, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu',
                 input_shape=(187, 256, 3), padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(BatchNormalization())
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.25))
model.add(BatchNormalization())
model.add(Dense(len(label[1]), activation='softmax'))  # label holds my one-hot labels, so this is 6

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
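Since I suspect the preprocessing, here is roughly what happens on the Python side before training; the arrays below are random stand-ins with the same shape as my data, not my real loader:

import numpy as np
from keras.utils import to_categorical

# Random stand-ins for the real images and integer class labels
# (shapes match my data: 187x256 RGB images, 6 classes).
x_raw = np.random.randint(0, 256, size=(10, 187, 256, 3), dtype=np.uint8)
y_raw = np.random.randint(0, 6, size=(10,))

# Scale pixels to [0, 1] and one-hot encode the labels for
# categorical_crossentropy; label is what the model code above uses.
x = x_raw.astype('float32') / 255.0
label = to_categorical(y_raw, num_classes=6)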
This is a simple classification task, and I did the same as most tutorials show. I can't find anyone else facing the same problem, so I want to ask why this happens and how to solve it.
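As a last sanity check, I print the layer stack on the Python side and compare it line by line with summary(model) from the R session (a minimal check, using the model built above):

import json

# Layer-by-layer summary; should match summary(model) in R.
model.summary()

# Full layer configs, for a stricter comparison of the two stacks.
print(json.dumps(model.get_config(), indent=2, default=str))

Thanks.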