
I have the following convolutional neural network architecture in MatConvNet, which I use to train on my own data:

function net = cnn_mnist_init(varargin)
% CNN_MNIST_INIT Initialize a CNN similar to LeNet for MNIST
opts.batchNormalization = false ;
opts.networkType = 'simplenn' ;
opts = vl_argparse(opts, varargin) ;

f= 0.0125 ;
net.layers = {} ;
net.layers{end+1} = struct('name','conv1',...
                           'type', 'conv', ...
                           'weights', {{f*randn(3,3,1,64, 'single'), zeros(1, 64, 'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool1',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [3 3], ...
                           'stride', 1, ...
                           'pad', 0);
net.layers{end+1} = struct('name','conv2',...
                           'type', 'conv', ...
                           'weights', {{f*randn(5,5,64,128, 'single'),zeros(1,128,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool2',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [2 2], ...
                           'stride', 2, ...
                           'pad', 0) ;
net.layers{end+1} = struct('name','conv3',...
                           'type', 'conv', ...
                           'weights', {{f*randn(3,3,128,256, 'single'),zeros(1,256,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool3',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [3 3], ...
                           'stride', 1, ...
                           'pad', 0) ;
net.layers{end+1} = struct('name','conv4',...
                           'type', 'conv', ...
                           'weights', {{f*randn(5,5,256,512, 'single'),zeros(1,512,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool4',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [2 2], ...
                           'stride', 1, ...
                           'pad', 0) ;
net.layers{end+1} = struct('name','ip1',...
                           'type', 'conv', ...
                           'weights', {{f*randn(1,1,512,256, 'single'), zeros(1,256,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','relu',...
                           'type', 'relu');
net.layers{end+1} = struct('name','classifier',...
                           'type', 'conv', ...
                           'weights', {{f*randn(1,1,256,2, 'single'), zeros(1,2,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','loss',...
                           'type', 'softmaxloss') ;

% optionally switch to batch normalization
if opts.batchNormalization
  net = insertBnorm(net, 1) ;
  net = insertBnorm(net, 4) ;
  net = insertBnorm(net, 7) ;
  net = insertBnorm(net, 10) ;
  net = insertBnorm(net, 13) ;
end

% Meta parameters
net.meta.inputSize = [28 28 1] ;
net.meta.trainOpts.learningRate = [0.01*ones(1,10) 0.001*ones(1,10) 0.0001*ones(1,10)];
net.meta.trainOpts.numEpochs = length(net.meta.trainOpts.learningRate) ;
net.meta.trainOpts.batchSize = 256 ;
net.meta.trainOpts.momentum = 0.9 ;
net.meta.trainOpts.weightDecay = 0.0005 ;

% --------------------------------------------------------------------
function net = insertBnorm(net, l)
% --------------------------------------------------------------------
assert(isfield(net.layers{l}, 'weights'));
ndim = size(net.layers{l}.weights{1}, 4);
layer = struct('type', 'bnorm', ...
               'weights', {{ones(ndim, 1, 'single'), zeros(ndim, 1, 'single')}}, ...
               'learningRate', [1 1], ...
               'weightDecay', [0 0]) ;
net.layers{l}.biases = [] ;
net.layers = horzcat(net.layers(1:l), layer, net.layers(l+1:end)) ;

What I want to do is build the same architecture in Keras; this is what I have tried so far:

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense

model = Sequential()

model.add(Conv2D(64, (3, 3), strides=1, input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(3, 3), strides=1))

model.add(Conv2D(128, (5, 5), strides=1))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(Conv2D(256, (3, 3), strides=1))
model.add(MaxPooling2D(pool_size=(3, 3), strides=1))

model.add(Conv2D(512, (5, 5), strides=1))
model.add(MaxPooling2D(pool_size=(2, 2), strides=1))

model.add(Conv2D(256, (1, 1)))
model.add(Activation('relu'))

model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))

opt = keras.optimizers.RMSprop(lr=0.0001, decay=0.0005)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['binary_accuracy'])

However, when I run the MatConvNet network I get 87% accuracy, and if I run the Keras version I get 77% accuracy. If they are supposed to be the same network and the data is the same, where is the difference? What is wrong in my Keras architecture?

mad

1 Answer


In your MatConvNet version, you use SGD with momentum.

In Keras, you use RMSprop.

With a different update rule you should try different learning rates. Also, momentum is sometimes helpful when training a CNN.

Could you try SGD with momentum in Keras and let me know what happens?
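Something along these lines should mirror your MatConvNet trainOpts (a minimal sketch; note that Keras' decay argument is a per-update learning-rate decay, not the L2 weight decay that MatConvNet's weightDecay applies, so I reproduce your stepped learning-rate schedule with a callback instead; the name step_lr is just for illustration):

from keras.optimizers import SGD
from keras.callbacks import LearningRateScheduler

# SGD with momentum, matching net.meta.trainOpts.momentum = 0.9
sgd = SGD(lr=0.01, momentum=0.9)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

# Roughly reproduce the MatConvNet schedule:
# 0.01 for 10 epochs, then 0.001, then 0.0001
def step_lr(epoch):
    return 0.01 * (0.1 ** (epoch // 10))

# model.fit(..., epochs=30, batch_size=256,
#           callbacks=[LearningRateScheduler(step_lr)])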

Another thing that might be different is the initialization. For example, in MatConvNet you use Gaussian initialization with f = 0.0125 as the standard deviation. In Keras I'm not sure about the default initialization.
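(For what it's worth, Keras 2 defaults to glorot_uniform for conv kernels and zeros for biases.) If you want to match the MatConvNet initialization, here is a sketch for the first layer, to be repeated for the others:

from keras.initializers import RandomNormal

# Gaussian kernels with the same standard deviation as f = 0.0125;
# zero biases are already the Keras default, like zeros(...) above
gauss = RandomNormal(mean=0.0, stddev=0.0125)

model.add(Conv2D(64, (3, 3), strides=1,
                 kernel_initializer=gauss,
                 input_shape=input_shape))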

In general, if you don't use batch normalization, the network is prone to many numerical issues. If you use batch normalization in both networks, I bet the results would be similar. Is there any reason you don't want to use batch normalization?
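In Keras that would look something like the sketch below, adding BatchNormalization after each convolution the way your insertBnorm does (axis=-1 assumes channels-last data):

from keras.layers import BatchNormalization

model.add(Conv2D(64, (3, 3), strides=1, input_shape=input_shape))
model.add(BatchNormalization(axis=-1))  # normalize over channels, like bnorm
model.add(MaxPooling2D(pool_size=(3, 3), strides=1))
# ...repeat after conv2, conv3, conv4 and ip1, as in the insertBnorm calls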

DataHungry
  • I tried this sgd = keras.optimizers.SGD(lr=0.0001, decay=0.0005, momentum=0.9) but the result got even worse: 50%. Do you believe the architecture of both networks is the same? – mad Jul 04 '17 at 02:38
  • Thanks @DataHungry, what is the corresponding Gaussian initialization of filters in Keras? – mad Jul 05 '17 at 02:20
  • @mad it should be the RandomNormal initializer in https://keras.io/initializers/ But my suggestion is still to use batch normalization in both networks. – DataHungry Jul 05 '17 at 06:24
  • @mad did you use any data augmentation (shifting of images) in MatConvNet but not in Keras? – DataHungry Jul 05 '17 at 06:29
  • No data augmentation. I will try your suggestions and return as soon as possible. Thank you so much! – mad Jul 06 '17 at 01:48