
I want to understand how to generate a high-resolution image from a low-resolution one using convolutional neural networks.

Should the network take the smaller image as input and produce an output image twice its size?

I made the following model:

from keras.layers import Input, Dense, UpSampling2D
from keras.models import Model

w, h, c = x_train[0].shape

inputs = Input(shape=(w, h, c), name='LR')
x = UpSampling2D(size=(2, 2), name='UP')(inputs)   # double the spatial dimensions
h1 = Dense(720, activation='relu', name='hide')(x)  # Dense acts only on the last (channel) axis
h2 = Dense(1280, activation='relu', name='hide2')(h1)
output = Dense(3, activation='relu', name='output')(h2)

model = Model(inputs=inputs, outputs=output)
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=50, verbose=0)

The images in y_train are twice the size of those in x_train.

But I get the following error message:

ResourceExhaustedError: OOM when allocating tensor with shape[4608000,720] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
     [[{{node hide/MatMul}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info

What am I doing wrong?

Beto
  • You might want to look into the concept of [autoencoders](https://en.wikipedia.org/wiki/Autoencoder). – jaaq Mar 05 '19 at 12:26
  • I did model.fit(x_train,y_train,batch_size=1024, epochs=50, verbose=0) and the result was "exceeded 10% of system memory". – Beto Mar 05 '19 at 12:39
  • By the way, your model is not a CNN. – Dr. Snoopy Mar 05 '19 at 12:41
  • @MatiasValdenegro What is the correct approach in this case? – Beto Mar 05 '19 at 12:43
  • With Convolutional layers and no Dense layers. – Dr. Snoopy Mar 05 '19 at 12:44
  • @MatiasValdenegro has a certain point, in that it is never a good idea to ask multiple questions simultaneously (here *how can I do super-resolution* and *I get an OOM error*). These are two separate questions, and you should preferably not mix them (obviously my answer below addresses only the OOM part)... – desertnaut Mar 05 '19 at 12:47
  • @desertnaut The OP is likely getting an OOM *because* he is not using a CNN. The number of parameters in this case will just explode. – Dr. Snoopy Mar 05 '19 at 12:49
  • @MatiasValdenegro Agree, but I guess you don't disagree with my argument above... – desertnaut Mar 05 '19 at 12:50
  • @MatiasValdenegro How do I decrease and enlarge the image to the desired size? How do I use Conv2D and Conv2DTranspose in this case? – Beto Mar 05 '19 at 12:50
  • That's a different question; as @desertnaut mentions, you should ask only one question at a time. – Dr. Snoopy Mar 05 '19 at 12:51 (a minimal sketch of the convolutional approach follows below)
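
Following up on the Conv2D / Conv2DTranspose question above: here is a minimal sketch of the fully convolutional approach Dr. Snoopy describes, assuming tensorflow.keras. The layer widths, kernel sizes, and the sigmoid output (which presumes pixel values scaled to [0, 1]) are illustrative assumptions, not code from anyone in the thread.

# Sketch: fully convolutional 2x upscaler; all layer sizes are illustrative.
from tensorflow.keras.layers import Input, Conv2D, Conv2DTranspose
from tensorflow.keras.models import Model

inp = Input(shape=(None, None, 3), name='LR')   # any spatial size, 3 channels
x = Conv2D(64, (3, 3), padding='same', activation='relu')(inp)
x = Conv2DTranspose(64, (3, 3), strides=(2, 2), padding='same',
                    activation='relu')(x)       # learnable 2x upsampling
out = Conv2D(3, (3, 3), padding='same', activation='sigmoid', name='HR')(x)

model = Model(inputs=inp, outputs=out)
model.compile(loss='mse', optimizer='adam')

Because convolutions share weights across spatial positions, the parameter count stays small and independent of the image resolution, unlike the Dense layers in the question.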

1 Answer


Such out-of-memory (OOM) errors are typical of large batch sizes that simply cannot fit into your memory.

> I did model.fit(x_train,y_train,batch_size=1024, epochs=50, verbose=0) and the result was "exceeded 10% of system memory".

1024 sounds too large, then. Start small (e.g. 64), and increase gradually in powers of 2 (128, 256, ...) until you find the largest batch size that still fits into your memory.
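
For instance (64 is just an assumed starting point; the right value depends on your model and available memory):

# Hypothetical starting point; double it while it still fits in memory.
model.fit(x_train, y_train, batch_size=64, epochs=50, verbose=0)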

The general discussion in "How to calculate optimal batch size" might be helpful, too...

desertnaut
  • Modifying the batch_size did not work for me. My network must be wrong. – Beto Mar 05 '19 at 13:02
  • @Beto What exactly do you mean *did not work*? Did the OOM error go away? That was the point of the answer... So, you may want to kindly accept it and open a new question on the *methodology* part (which I see you have [done already](https://stackoverflow.com/questions/55003568/how-to-make-a-cnn-to-enlarge-images)). – desertnaut Mar 05 '19 at 13:12
  • No, and my system (Linux) crashed. The error now is "exceeded 10% of system memory", and it crashes my system. – Beto Mar 05 '19 at 13:14