I've read this article http://www.codeproject.com/Articles/143059/Neural-Network-for-Recognition-of-Handwritten-Di and I got stuck on this part:

Layer #0: the grayscale image of the handwritten character from the MNIST database, padded to 29x29 pixels. There are 29x29 = 841 neurons in the input layer.

Layer #1: a convolutional layer with six (6) feature maps. There are 13x13x6 = 1014 neurons, (5x5+1)x6 = 156 weights, and 1014x26 = 26364 connections from layer #1 to the previous layer.
How can we get six (6) feature maps just from convolving the image? I think we would get only one feature map. Or am I wrong? Is the piece I'm missing simply that there are six separate 5x5 kernels, one per feature map?
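To make my question concrete, here is a minimal NumPy sketch of what I *think* the article means. The stride of 2 and all the names are my own assumptions (the article only gives the 29x29 -> 13x13 sizes), so please correct me if I've misread it:

```python
import numpy as np

# Assumptions (mine, not from the article): stride 2 and no extra padding,
# so a 5x5 kernel over a 29x29 input yields (29 - 5) // 2 + 1 = 13.
IMAGE_SIZE = 29
KERNEL_SIZE = 5
STRIDE = 2
NUM_MAPS = 6
OUT_SIZE = (IMAGE_SIZE - KERNEL_SIZE) // STRIDE + 1  # 13

rng = np.random.default_rng(0)
image = rng.random((IMAGE_SIZE, IMAGE_SIZE))                # layer #0 input
kernels = rng.random((NUM_MAPS, KERNEL_SIZE, KERNEL_SIZE))  # one 5x5 kernel per map
biases = rng.random(NUM_MAPS)                               # one bias per map

# Each feature map comes from its own kernel: six kernels -> six maps.
feature_maps = np.zeros((NUM_MAPS, OUT_SIZE, OUT_SIZE))
for m in range(NUM_MAPS):
    for i in range(OUT_SIZE):
        for j in range(OUT_SIZE):
            patch = image[i * STRIDE : i * STRIDE + KERNEL_SIZE,
                          j * STRIDE : j * STRIDE + KERNEL_SIZE]
            feature_maps[m, i, j] = np.sum(patch * kernels[m]) + biases[m]

print(feature_maps.shape)                        # (6, 13, 13): 13x13x6 = 1014 neurons
print(NUM_MAPS * (KERNEL_SIZE**2 + 1))           # (5x5+1)x6 = 156 weights
print(feature_maps.size * (KERNEL_SIZE**2 + 1))  # 1014x26 = 26364 connections
```

If this reading is right, the article's numbers (1014 neurons, 156 weights, 26364 connections) all fall out of the same arithmetic, and the six maps exist because there are six kernels, not because one convolution somehow produces six outputs.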