
I'm unable to reshape a tensor loaded from my own custom dataset. As shown below, ds_train has a batch size of 8, and I want to reshape it to (len(ds_train), 128*128) so that I can feed each batch to my Keras autoencoder model. I'm new to TF and couldn't find a solution online, so I'm posting here.

ds_train = tf.keras.preprocessing.image_dataset_from_directory(
    directory=healthy_path,
    labels="inferred",
    label_mode=None,
    color_mode="grayscale",
    batch_size=8,
    image_size=(128, 128),
    shuffle=True,
    seed=123,
    validation_split=0.05,
    subset="training",
)

Similarly, my model is based on the TF2 functional API, as follows:

inputs = keras.Input(shape=(128*128))
norm = layers.experimental.preprocessing.Rescaling(1./255)(inputs)
encode = layers.Dense(14, activation='relu', name='encode')(norm)
coded = layers.Dense(3, activation='relu', name='coded')(encode)
decode = layers.Dense(14, activation='relu', name='decode')(coded)
decoded = layers.Dense(128*128, activation='sigmoid', name='decoded')(decode)

My attempt at reshaping:

ds_train = tf.reshape(ds_train, shape=[-1])
ds_validation = tf.reshape(ds_train, shape=[-1])
#AUTOTUNE = tf.data.experimental.AUTOTUNE
#ds_train = ds_train.cache().prefetch(buffer_size=AUTOTUNE)
#ds_validation = ds_validation.cache().prefetch(buffer_size=AUTOTUNE)

Error:

ValueError: Attempt to convert a value (<BatchDataset shapes: (None, 128, 128, 1), types: tf.float32>) with an unsupported type (<class 'tensorflow.python.data.ops.dataset_ops.BatchDataset'>) to a Tensor.

Full error traceback:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-17-764a960c83e5> in <module>
----> 1 ds_train = tf.reshape(ds_train, shape=[-1])
      2 ds_validation = tf.reshape(ds_train, shape=[-1])
      3 #AUTOTUNE = tf.data.experimental.AUTOTUNE
      4 #ds_train = ds_train.cache().prefetch(buffer_size=AUTOTUNE)
      5 #ds_validation = ds_validation.cache().prefetch(buffer_size=AUTOTUNE)

C:\Anaconda3\lib\site-packages\tensorflow\python\util\dispatch.py in wrapper(*args, **kwargs)
    199     """Call target, and fall back on dispatchers if there is a TypeError."""
    200     try:
--> 201       return target(*args, **kwargs)
    202     except (TypeError, ValueError):
    203       # Note: convert_to_eager_tensor currently raises a ValueError, not a

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py in reshape(tensor, shape, name)
    193     A `Tensor`. Has the same type as `tensor`.
    194   """
--> 195   result = gen_array_ops.reshape(tensor, shape, name)
    196   tensor_util.maybe_set_static_shape(result, shape)
    197   return result

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py in reshape(tensor, shape, name)
   8227     try:
   8228       return reshape_eager_fallback(
-> 8229           tensor, shape, name=name, ctx=_ctx)
   8230     except _core._SymbolicException:
   8231       pass  # Add nodes to the TensorFlow graph.

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py in reshape_eager_fallback(tensor, shape, name, ctx)
   8247 
   8248 def reshape_eager_fallback(tensor, shape, name, ctx):
-> 8249   _attr_T, (tensor,) = _execute.args_to_matching_eager([tensor], ctx)
   8250   _attr_Tshape, (shape,) = _execute.args_to_matching_eager([shape], ctx, _dtypes.int32)
   8251   _inputs_flat = [tensor, shape]

C:\Anaconda3\lib\site-packages\tensorflow\python\eager\execute.py in args_to_matching_eager(l, ctx, default_dtype)
    261       ret.append(
    262           ops.convert_to_tensor(
--> 263               t, dtype, preferred_dtype=default_dtype, ctx=ctx))
    264       if dtype is None:
    265         dtype = ret[-1].dtype

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
   1497 
   1498     if ret is None:
-> 1499       ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
   1500 
   1501     if ret is NotImplemented:

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref)
    336                                          as_ref=False):
    337   _ = as_ref
--> 338   return constant(v, dtype=dtype, name=name)
    339 
    340 

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in constant(value, dtype, shape, name)
    262   """
    263   return _constant_impl(value, dtype, shape, name, verify_shape=False,
--> 264                         allow_broadcast=True)
    265 
    266 

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
    273       with trace.Trace("tf.constant"):
    274         return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
--> 275     return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
    276 
    277   g = ops.get_default_graph()

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
    298 def _constant_eager_impl(ctx, value, dtype, shape, verify_shape):
    299   """Implementation of eager constant."""
--> 300   t = convert_to_eager_tensor(value, ctx, dtype)
    301   if shape is None:
    302     return t

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
     96       dtype = dtypes.as_dtype(dtype).as_datatype_enum
     97   ctx.ensure_initialized()
---> 98   return ops.EagerTensor(value, ctx.device_name, dtype)
     99 
    100 
– calculusnoob

1 Answer


Try changing the shape inside the neural net:

inputs = keras.Input(shape=(128, 128, 1))
flat = keras.layers.Flatten()(inputs)

This would work:

import numpy as np
import tensorflow as tf

x = np.random.rand(10, 128, 128, 1).astype(np.float32)

inputs = tf.keras.Input(shape=(128, 128, 1))
flat = tf.keras.layers.Flatten()(inputs)
encode = tf.keras.layers.Dense(14, activation='relu', name='encode')(flat)
coded = tf.keras.layers.Dense(3, activation='relu', name='coded')(encode)
decode = tf.keras.layers.Dense(14, activation='relu', name='decode')(coded)
decoded = tf.keras.layers.Dense(128*128, activation='sigmoid', name='decoded')(decode)

model = tf.keras.Model(inputs=inputs, outputs=decoded)

model.build(input_shape=x.shape)  # remove this, it's just for demonstrating

model(x)  # remove this, it's just for demonstrating
<tf.Tensor: shape=(10, 16384), dtype=float32, numpy=
array([[0.50187236, 0.4986383 , 0.50084716, ..., 0.4998364 , 0.50000435,
        0.4999416 ],
       [0.5020216 , 0.4985297 , 0.5009147 , ..., 0.4998234 , 0.5000047 ,
        0.49993694],
       [0.50179213, 0.49869663, 0.50081086, ..., 0.49984342, 0.5000042 ,
        0.4999441 ],
       ...,
       [0.5021732 , 0.49841946, 0.50098324, ..., 0.49981016, 0.50000507,
        0.49993217],
       [0.50205255, 0.49843505, 0.5009038 , ..., 0.49979147, 0.4999932 ,
        0.49991176],
       [0.50192004, 0.49860355, 0.50086874, ..., 0.49983227, 0.5000045 ,
        0.4999401 ]], dtype=float32)>

Note that I removed the rescaling layer; I don't have it in my TensorFlow version. You can put it right back.
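If your TensorFlow version ships it (it lives under tf.keras.layers.experimental.preprocessing in TF 2.2+), one natural place to restore the rescaling step is right after the Flatten layer; a minimal sketch:

# Sketch: restoring the Rescaling layer after Flatten, assuming your TF
# version provides tf.keras.layers.experimental.preprocessing.Rescaling.
inputs = tf.keras.Input(shape=(128, 128, 1))
flat = tf.keras.layers.Flatten()(inputs)
norm = tf.keras.layers.experimental.preprocessing.Rescaling(1. / 255)(flat)
encode = tf.keras.layers.Dense(14, activation='relu', name='encode')(norm)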

– Nicolas Gervais
  • Getting this error. `ValueError: Input 0 of layer encode is incompatible with the layer: expected axis -1 of input shape to have value 16384 but received input with shape [None, 128, 128, 1]` – calculusnoob Dec 10 '20 at 13:52
  • I think you might have forgotten to change the input shape in your input layer – Nicolas Gervais Dec 10 '20 at 13:57
  • This is what my model looks like. `inputs = tf.keras.Input(shape=(128, 128, 1)) flat = tf.keras.layers.Flatten()(inputs) norm = layers.experimental.preprocessing.Rescaling(1./255)(flat) . (removed 2 lines as character limitations) . decoded = layers.Dense(128*128, activation='sigmoid', name='decoded')(decode)` Error now is: `ValueError: No gradients provided for any variable: ['encode/kernel:0', 'encode/bias:0', 'coded/kernel:0', 'coded/bias:0', 'decode/kernel:0', 'decode/bias:0', 'decoded/kernel:0', 'decoded/bias:0']` – calculusnoob Dec 10 '20 at 14:09
  • Are you calling the layer on the right tensor? – Nicolas Gervais Dec 10 '20 at 14:15
  • Yeah, I think so. I don't understand what's wrong. `autoencoder.fit(x=ds_train, epochs=200, verbose=2)` – calculusnoob Dec 10 '20 at 14:18
  • That's the model. When you define the layers, do you have the previous layer inside the parentheses? E.g., `flat = tf.keras.layers.Flatten()(inputs)`, where `inputs` is the output of the previous layer. – Nicolas Gervais Dec 10 '20 at 14:22
  • Yes, all layers have previous layers inside the parentheses `inputs = tf.keras.Input(shape=(128, 128, 1)) flat = tf.keras.layers.Flatten()(inputs)` – calculusnoob Dec 10 '20 at 14:26
  • So what error are you facing now? If you copy/paste my code and don't change a thing, does it run? – Nicolas Gervais Dec 10 '20 at 14:28
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/225778/discussion-between-calculusnoob-and-nicolas-gervais). – calculusnoob Dec 10 '20 at 14:29
  • Error as before (occurs when I fit the model): `ValueError: No gradients provided for any variable: ['encode/kernel:0', 'encode/bias:0', 'coded/kernel:0', 'coded/bias:0', 'decode/kernel:0', 'decode/bias:0', 'decoded/kernel:0', 'decoded/bias:0']` – calculusnoob Dec 10 '20 at 14:34
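A likely cause of the `No gradients provided` error discussed in the comments: with `label_mode=None`, the dataset yields images only, so `fit` has no targets to compute the reconstruction loss against. One possible fix, sketched here using the `ds_train` and `autoencoder` names from above, is to map each batch to an (x, x) pair before fitting:

# Autoencoders train against their own input; with label_mode=None the
# dataset yields only images, so pair each batch with itself as the target.
ds_train = ds_train.map(lambda x: (x, x))
autoencoder.fit(ds_train, epochs=200, verbose=2)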