I'd like to make a ConvNet whose output has the same size as its input, so I implemented one using the TFLearn library. Since I only wanted a simple example satisfying that purpose, I used a single convolution layer with zero-padding to keep the output the same size as the input. Here is the code:
import tflearn
from tflearn import data_utils as du

X = X.reshape([-1, 400, 400, 1])
Y = Y.reshape([-1, 400, 400, 1])
testX = testX.reshape([-1, 400, 400, 1])
testY = testY.reshape([-1, 400, 400, 1])
X, mean = du.featurewise_zero_center(X)
testX = du.featurewise_zero_center(testX, mean)
# Building a Network
net = tflearn.input_data(shape=[None, 400, 400, 1])
net = tflearn.conv_2d(net, 64, 3, padding='same', activation='relu', bias=False)
sgd = tflearn.SGD(learning_rate=0.1, lr_decay=0.96, decay_step=300)
net = tflearn.regression(net, optimizer=sgd,
                         loss='categorical_crossentropy')
# Training
model = tflearn.DNN(net, checkpoint_path='model_network',
                    max_checkpoints=10, tensorboard_verbose=3)
model.fit(X, Y, n_epoch=100, validation_set=(testX, testY),
          show_metric=True, batch_size=256, run_id='network_test')
However, this code yields the following error:
ValueError: Cannot feed value of shape (256, 400, 400) for Tensor u'TargetsData/Y:0', which has shape '(?, 64)'
I've searched and checked some documents, but I can't seem to get this to work.
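In case it helps, here is a sketch of the shape mismatch as I understand it (the variable names here are mine, and the note about the target placeholder is my reading of the error message, not something I've confirmed in the TFLearn source):

```python
batch, h, w, n_filters = 256, 400, 400, 64

# With padding='same' and stride 1, conv_2d keeps the spatial
# dimensions, so the layer's output tensor should be:
conv_out_shape = (batch, h, w, n_filters)   # (256, 400, 400, 64)

# My targets are full-size images:
target_shape = (batch, h, w, 1)             # (256, 400, 400, 1)

# Yet the error says the target placeholder TargetsData/Y:0 has
# shape '(?, 64)', i.e. only the last (channel) dimension survived:
placeholder_shape = (None, n_filters)       # '(?, 64)'
```

So the targets I feed and the placeholder that `regression` builds disagree, and I'm not sure which side I'm supposed to change.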