I am trying to design a convolutional neural network for detecting a small red football. I have captured approximately 4,000 pictures of a scene in different configurations (adding chairs, bottles, etc.) without the ball, and 4,000 pictures of the scene, also in different configurations, with the ball somewhere inside. I am using a resolution of 32x32 px. The ball is visually identifiable in every picture where it is present. These are some positive example pictures (shown here upside down):
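For context, this is roughly how the data can be loaded and labeled (a minimal sketch; the folder names "with_ball" and "without_ball" and the one-hot encoding are placeholder assumptions, not my actual pipeline):

import os
import numpy as np
from PIL import Image

def load_folder(folder, label):
    # Load every image in `folder`, resize to 32x32, attach the one-hot `label`
    images, labels = [], []
    for fname in os.listdir(folder):
        img = Image.open(os.path.join(folder, fname)).resize((32, 32))
        images.append(np.asarray(img, dtype=np.float32) / 255.0)  # scale to [0, 1]
        labels.append(label)
    return images, labels

pos_x, pos_y = load_folder("with_ball", [0.0, 1.0])      # ball present
neg_x, neg_y = load_folder("without_ball", [1.0, 0.0])   # no ball
X = np.stack(pos_x + neg_x)   # shape (~8000, 32, 32, 3)
Y = np.stack(pos_y + neg_y)   # shape (~8000, 2)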
I have tried numerous combinations when designing the convolutional NN, but I cannot find a decent one. I will present two architectures I have tried (a "normal"-sized one and a very small one). I kept designing smaller and smaller networks because I thought it would help with the overfitting problem. So, I have tried:
Normal Network Design
Input: 32x32x3
First Conv Layer:
W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 3, 32], stddev=0.1), name="w1")
b_conv1 = tf.Variable(tf.constant(0.1, shape=[32]), name="b1")
h_conv1 = tf.nn.relu(tf.nn.conv2d(x, W_conv1, strides=[1, 1, 1, 1], padding='SAME') + b_conv1, name="conv1")  # output: 32x32x32
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name="pool1")  # output: 16x16x32
2nd Conv Layer:
W_conv2 = tf.Variable(tf.truncated_normal([5, 5, 32, 16], stddev=0.1), name="w2")
b_conv2 = tf.Variable(tf.constant(0.1, shape=[16]), name="b2")
h_conv2 = tf.nn.relu(tf.nn.conv2d(h_pool1, W_conv2, strides=[1, 1, 1, 1], padding='SAME') + b_conv2, name="conv2")  # output: 16x16x16
h_pool2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name="pool2")  # output: 8x8x16
Fully connected layer:
W_fc1 = tf.Variable(tf.truncated_normal([8 * 8 * 16, 16], stddev=0.1), name="w3")
b_fc1 = tf.Variable(tf.constant(0.1, shape=[16]), name="b3")
h_pool2_flat = tf.reshape(h_pool2, [-1, 8 * 8 * 16], name="flat3")  # flatten the 8x8x16 feature map
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1, name="conv3")
Dropout
keep_prob = tf.placeholder(tf.float32, name="keep3")
h_fc2_drop = tf.nn.dropout(h_fc1, keep_prob, name="drop3")
Readout Layer
W_fc3 = tf.Variable(tf.truncated_normal([16, 2], stddev=0.1), name="w4")
b_fc3 = tf.Variable(tf.constant(0.1, shape=[2]), name="b4")
y_conv = tf.matmul(h_fc2_drop, W_fc3, name="yconv") + b_fc3
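As a quick sanity check on the flatten size, the static shapes work out as follows (assuming the graph above):

print(h_pool1.shape)  # (?, 16, 16, 32)
print(h_pool2.shape)  # (?, 8, 8, 16)  -> flattened to 8 * 8 * 16 = 1024
print(y_conv.shape)   # (?, 2)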
Other info
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv)
    + 0.005 * tf.nn.l2_loss(W_conv1) + 0.005 * tf.nn.l2_loss(W_fc1) + 0.005 * tf.nn.l2_loss(W_fc3))
train_step = tf.train.AdamOptimizer(1e-5, name="trainingstep").minimize(cross_entropy)
# Percentage of correct predictions
prediction = tf.nn.softmax(y_conv, name="y_prediction")
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1), name="correct_pred")
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name="acc")
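This is roughly how training and evaluation are run (a minimal sketch; x and y_ are the input/label placeholders, and X_train, Y_train, X_val, Y_val stand in for my actual data splits; note that keep_prob must be 1.0 at evaluation time so dropout is disabled):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for gen in range(55):  # "generations" over the training set
        for i in range(0, len(X_train), 500):  # batch_size = 500
            feed = {x: X_train[i:i + 500], y_: Y_train[i:i + 500], keep_prob: 0.4}
            sess.run(train_step, feed_dict=feed)
        # dropout disabled while measuring accuracy
        train_acc = sess.run(accuracy, feed_dict={x: X_train, y_: Y_train, keep_prob: 1.0})
        val_acc = sess.run(accuracy, feed_dict={x: X_val, y_: Y_val, keep_prob: 1.0})
        print(gen, train_acc, val_acc)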
Parameters
keep_prob: 0.4
batch_size=500
training time in generations=55
Results
Training set final accuracy= 90.2%
Validation set final accuracy= 52.2%
Graph: [link to accuracy graph]
Small Network Design
Input: 32x32x3
First Conv Layer:
W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 3, 16], stddev=0.1), name="w1")
b_conv1 = tf.Variable(tf.constant(0.1, shape=[16]), name="b1")
h_conv1 = tf.nn.relu(tf.nn.conv2d(x, W_conv1, strides=[1, 1, 1, 1], padding='SAME') + b_conv1, name="conv1")  # output: 32x32x16
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name="pool1")  # output: 16x16x16
Fully connected layer:
W_fc1 = tf.Variable(tf.truncated_normal([16 * 16 * 16, 8], stddev=0.1), name="w3")
b_fc1 = tf.Variable(tf.constant(0.1, shape=[8]), name="b3")
h_pool2_flat = tf.reshape(h_pool1, [-1, 16 * 16 * 16], name="flat3")  # flatten the 16x16x16 feature map
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1, name="conv3")
Dropout
keep_prob = tf.placeholder(tf.float32, name="keep3")
h_fc2_drop = tf.nn.dropout(h_fc1, keep_prob, name="drop3")
Readout Layer
W_fc3 = tf.Variable(tf.truncated_normal([8, 2], stddev=0.1), name="w4")
b_fc3 = tf.Variable(tf.constant(0.1, shape=[2]), name="b4")
y_conv = tf.matmul(h_fc2_drop, W_fc3, name="yconv") + b_fc3
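The same sanity check for the small network (only one pooling step, so the feature map is still 16x16):

print(h_pool1.shape)  # (?, 16, 16, 16)  -> flattened to 16 * 16 * 16 = 4096
print(h_fc1.shape)    # (?, 8)
print(y_conv.shape)   # (?, 2)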
Other info
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv)
    + 0.005 * tf.nn.l2_loss(W_conv1) + 0.005 * tf.nn.l2_loss(W_fc1) + 0.005 * tf.nn.l2_loss(W_fc3))
train_step = tf.train.AdamOptimizer(1e-5, name="trainingstep").minimize(cross_entropy)
# Percentage of correct predictions
prediction = tf.nn.softmax(y_conv, name="y_prediction")
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1), name="correct_pred")
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name="acc")
Parameters
keep_prob: 0.4
batch_size=500
training time in generations=55
Results
Training set final accuracy= 87%
Validation set final accuracy= 60.6%
Graph: [link to accuracy graph]
So, whatever I do, I cannot get decent accuracy on the validation set. I am sure something is missing, but I cannot identify what. I am using dropout and L2 regularization, but the network seems to overfit anyway.
Thanks for reading. Whether you are an amateur or advanced with CNNs, please leave feedback!