
So I loaded some images of resolution 1024x1024 into a list of tensors, and then used set_shape to change the shape of every tensor in the list to [128, 128, 3].

However, when I call eval() and check the shape of the image coming from the tensor, it says that the shape is [1024, 1024, 3].
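Since the original code was posted as screenshots, here is a minimal sketch of the situation being described (an assumption: the images are decoded with something like tf.io.decode_jpeg, whose static shape is unknown, which is why set_shape() is accepted):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # TF1-style graph mode, as in the question

# Hypothetical stand-in for one of the loaded images: a 1024x1024 JPEG built in-graph.
jpeg_bytes = tf.io.encode_jpeg(tf.zeros([1024, 1024, 3], dtype=tf.uint8))
img = tf.io.decode_jpeg(jpeg_bytes)  # static shape: (None, None, None)

img.set_shape([128, 128, 3])  # accepted: it only annotates the static shape
print(img.shape)              # (128, 128, 3)

with tf.compat.v1.Session() as sess:
    print(sess.run(img).shape)  # (1024, 1024, 3) -- the actual pixel data
```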

Then why didn't set_shape throw an error?

  • https://stackoverflow.com/help/how-to-ask is a guide on how to ask questions on Stack Overflow. One of the recommendations is not to use images of code but to put the code in the question itself, so that people can read it. Hope this helps, and it may get you better answers... – Paul Brennan Dec 03 '20 at 21:41

1 Answer


I think you may be using set_shape() incorrectly. See Clarification on tf.Tensor.set_shape().

"One analogy is that tf.set_shape() is like a run-time cast in an object-oriented language like Java. For example, if you have a pointer to an Object but know that, in fact, it is a String, you might do the cast (String) obj in order to pass obj to a method that expects a String argument. However, if you have a String s and try to cast it to a java.util.Vector, the compiler will give you an error, because these two types are unrelated."

Basically, this function provides a static shape annotation for a tensor whose shape is (partially) unspecified. It does not resize a tensor into a tensor with a different number of elements, and no data is touched at all. So why doesn't it throw an error? set_shape() only validates the new shape against what is statically known: if your tensors come from something like tf.io.decode_jpeg(), their static shape is (None, None, None), and [128, 128, 3] is compatible with that, so the call succeeds. It would raise a ValueError only if the statically known shape were incompatible (e.g. calling set_shape([128, 128, 3]) on a tensor already known to be [1024, 1024, 3]). Crucially, no runtime check is inserted either, so eval() still returns the actual 1024x1024x3 data, and you may hit problems later in your code when you use the tensor assuming the shape really is what you passed to set_shape().
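A minimal sketch of both cases (using TF1-style graph mode, matching the era of the question; the variable names are illustrative):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Unknown static shape: set_shape() just records the annotation.
unknown = tf.compat.v1.placeholder(tf.float32, shape=None)
unknown.set_shape([128, 128, 3])  # fine: compatible with <unknown>
print(unknown.shape)              # (128, 128, 3)

# Known static shape: set_shape() does validate, and an incompatible
# shape raises a ValueError immediately.
known = tf.zeros([1024, 1024, 3])
try:
    known.set_shape([128, 128, 3])
except ValueError:
    print("ValueError: incompatible shapes")
```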

Consider using tf.image.resize() instead. This function is designed to do what you are attempting to do here. It accepts a batch of images and returns a resized batch.

new_list = tf.image.resize(lista, [128, 128])
DerekG
  • Thanks. I am asking this question because I trained a network with 1024x1024 images, without changing the value of 128 (I didn't know I had to change it, so it was an oversight). However the training worked, and I wanted to understand if it was reliable or not. I don't understand if the images I am loading are changing or not (for example, if they are resized because of the different shape when loaded). One strange thing that happened was that I tried to change the set_shape to `([1024, 1024, 3])` and the training was giving me memory errors, while it wasn't with `([128, 128, 3])` – neuralnetslover Dec 04 '20 at 17:32
  • And this is why `tf.image.resize()` wouldn't help me in this case. – neuralnetslover Dec 04 '20 at 17:33