
I am extremely new to TensorFlow, so I'm not sure exactly what information you will need to solve my issue. Do let me know if you need anything additional.

Basically I'm trying to run images through a Sequential model. Following the tutorial at https://www.tensorflow.org/tutorials/images/classification, I am trying to plug my own dataset into the same pipeline.

I'm currently stuck at running my model with model.fit(), which gives me the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-90-85c03bda7f8f> in <module>
     16 
     17 epochs=1
---> 18 history = model.fit(
     19   train_data,
     20   validation_data=test_data,

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1132                 _r=1):
   1133               callbacks.on_train_batch_begin(step)
-> 1134               tmp_logs = self.train_function(iterator)
   1135               if data_handler.should_sync:
   1136                 context.async_wait()

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    816     tracing_count = self.experimental_get_tracing_count()
    817     with trace.Trace(self._name) as tm:
--> 818       result = self._call(*args, **kwds)
    819       compiler = "xla" if self._jit_compile else "nonXla"
    820       new_tracing_count = self.experimental_get_tracing_count()

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    860       # This is the first call of __call__, so we have to initialize.
    861       initializers = []
--> 862       self._initialize(args, kwds, add_initializers_to=initializers)
    863     finally:
    864       # At this point we know that the initialization is complete (or less

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
    701     self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
    702     self._concrete_stateful_fn = (
--> 703         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
    704             *args, **kwds))
    705 

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   3018       args, kwargs = None, None
   3019     with self._lock:
-> 3020       graph_function, _ = self._maybe_define_function(args, kwargs)
   3021     return graph_function
   3022 

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   3412 
   3413           self._function_cache.missed.add(call_context_key)
-> 3414           graph_function = self._create_graph_function(args, kwargs)
   3415           self._function_cache.primary[cache_key] = graph_function
   3416 

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   3247     arg_names = base_arg_names + missing_arg_names
   3248     graph_function = ConcreteFunction(
-> 3249         func_graph_module.func_graph_from_py_func(
   3250             self._name,
   3251             self._python_function,

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    996         _, original_func = tf_decorator.unwrap(python_func)
    997 
--> 998       func_outputs = python_func(*func_args, **func_kwargs)
    999 
   1000       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    610             xla_context.Exit()
    611         else:
--> 612           out = weak_wrapped_fn().__wrapped__(*args, **kwds)
    613         return out
    614 

~/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    983           except Exception as e:  # pylint:disable=broad-except
    984             if hasattr(e, "ag_error_metadata"):
--> 985               raise e.ag_error_metadata.to_exception(e)
    986             else:
    987               raise

ValueError: in user code:

    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:839 train_function  *
        return step_function(self, iterator)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:829 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1262 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2734 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:3423 _call_for_each_replica
        return fn(*args, **kwargs)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:822 run_step  **
        outputs = model.train_step(data)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:788 train_step
        y_pred = self(x, training=True)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:1032 __call__
        outputs = call_fn(inputs, *args, **kwargs)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/sequential.py:398 call
        outputs = layer(inputs, **kwargs)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:1028 __call__
        self._maybe_build(inputs)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:2722 _maybe_build
        self.build(input_shapes)  # pylint:disable=not-callable
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/layers/convolutional.py:188 build
        input_channel = self._get_input_channel(input_shape)
    /Users/mongchanghsi/opt/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/layers/convolutional.py:367 _get_input_channel
        raise ValueError('The channel dimension of the inputs '

    ValueError: The channel dimension of the inputs should be defined. Found `None`.

Here is my code for the model:

model = Sequential([
  layers.Conv2D(16, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(),
  layers.Conv2D(32, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(),
  layers.Conv2D(64, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(),
  layers.Flatten(),
  layers.Dense(128, activation='relu'),
  layers.Dense(4)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

epochs=10
history = model.fit(
  train_data,
  validation_data=test_data,
  epochs=epochs
)

I understand that the tutorial uses an inbuilt preprocessing function; however, I tried to build my own preprocessing function to facilitate my learning as well.

def preprocessing(image, target_size):
    # Extracting labels
    parts = tf.strings.split(image, os.sep)
    label = parts[-2]
    
    # Decoding image file
    path = tf.io.read_file(image)
    image = tf.image.decode_jpeg(path)
    
    # Cropping
    image = tf.image.crop_to_bounding_box(image, offset_height=25, offset_width=25, target_height=image_size, target_width=image_size)
    
    # Normalizing
    image = image / 255
    
    return image, label

list_ds = tf.data.Dataset.list_files(DATA_DIR + '/*/*')
preprocess_function = partial(preprocessing, target_size=image_size)
processed_data = list_ds.map(preprocess_function)
train_data = processed_data.take(8000).batch(batch_size)
test_data = processed_data.skip(8000).batch(batch_size)

Other information I can provide: the images are greyscale (hence 1 channel), I normalize them by /255 in my preprocessing function, image_size is 300, and batch_size is 100.
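If it helps, a quick way to see what the pipeline actually yields is to print the element spec of the batched dataset defined above (just a diagnostic check, using the train_data from my code):

# Inspect the shapes and dtypes the input pipeline produces per batch
print(train_data.element_spec)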

  • I think I know where the issue is but I'm not sure how to go about fixing it. I printed 'train_data' and got ``. If I'm not wrong, the channel dimension is stated as None, but I understand it should be 1 since the images are greyscale. How should I go about this? – MongChangHsi Feb 02 '21 at 16:51

1 Answer


Try this:

image = tf.image.decode_jpeg(path, channels=1)
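
Without the channels argument, tf.image.decode_jpeg leaves the number of channels unknown while the graph is traced, so the first Conv2D layer sees a channel dimension of None and raises the error above. Passing channels=1 forces a single-channel (greyscale) decode, which makes that dimension static. As a minimal sketch (reusing the variable names from the question's preprocessing function), the decoding step would look like this:

# Decode the JPEG with a fixed single greyscale channel so the resulting
# tensor has a statically known shape of (height, width, 1)
path = tf.io.read_file(image)                   # `image` here is the file path
image = tf.image.decode_jpeg(path, channels=1)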
Nicolas Gervais