I am trying to configure a model that I previously trained to classify images in such a way that it accepts images as base64 strings (instead of a NumPy array), converts them to a NumPy array and then performs the prediction. How do I add a layer on top of my regular input layer that accepts strings and outputs a NumPy array?
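To make the input format concrete, this is roughly the kind of string I want the model to accept (only a sketch; example.jpg is a placeholder file name):
import base64
# Read a JPEG from disk and base64-encode it; this string is what the
# client would send to the model instead of a NumPy array.
with open("example.jpg", "rb") as f:        # hypothetical file name
    b64_string = base64.b64encode(f.read())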
So I've already pre-trained a model that classifies images based on the ResNet architecture. Having looked at this and this answer, I am trying to create a Lambda layer that decodes the JPEG strings into RGB images. I have done this as shown in the sample code below:
image = tf.placeholder(shape=[], dtype=tf.string)
input_tensor = keras.layers.Input(shape = (1,), tensor = image, dtype=tf.string)
x = keras.layers.Lambda(lambda image: tf.image.decode_jpeg(image))(input_tensor)
output_tensor = model(x)
new_model = Model(input_tensor, output_tensor)
Here model is the keras.models.Model instance that I have pre-trained. I am expecting new_model to be a new Keras model that has one extra layer on top of my previous model, which accepts a base64 string and outputs a NumPy array into the next layer.
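For reference, this is roughly how I intend to call the combined model once the Lambda layer builds (only a sketch; I am not certain that feeding the placeholder directly is the right approach, and example.jpg is again a placeholder name):
from keras import backend as K
with open("example.jpg", "rb") as f:     # hypothetical file name
    jpeg_bytes = f.read()                # raw JPEG bytes fed as the string input
    # if the client sends base64 instead, I assume a tf.decode_base64 step
    # would be needed before tf.image.decode_jpeg
sess = K.get_session()                   # session holding the trained weights
preds = sess.run(output_tensor, feed_dict={image: jpeg_bytes})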
However, the third line of my code raises the following error:
TypeError: Input 'contents' of 'DecodeJpeg' Op has type float32 that does not match expected type of string.
My understanding of this is that the image argument passed to decode_jpeg() inside the Lambda layer is a float32 instead of a string, which seems odd to me, as I have set the dtype of both the placeholder and the Input layer to tf.string.
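To double-check that claim, I would expect inspections like the following to confirm the string dtype, and the raw op also seems fine with a plain string placeholder outside of Keras (again, only a sketch of the checks I have in mind):
print(image.dtype)         # expecting <dtype: 'string'>
print(input_tensor.dtype)  # expecting tf.string as well, since tensor=image was passed
# The op by itself accepts a scalar string placeholder when building the graph:
contents = tf.placeholder(shape=[], dtype=tf.string)
decoded = tf.image.decode_jpeg(contents)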
I have searched all over Stack Overflow but can't find a solution for this error. It appears this question has not found a solution for this specific issue either.
EDIT 1: corrected a typo and added the full error message. The full error message is shown below:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
509 as_ref=input_arg.is_ref,
--> 510 preferred_dtype=default_dtype)
511 except TypeError as err:
~/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx)
1103 if ret is None:
-> 1104 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
1105
~/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _TensorTensorConversionFunction(t, dtype, name, as_ref)
946 "Tensor conversion requested dtype %s for Tensor with dtype %s: %r" %
--> 947 (dtype.name, t.dtype.name, str(t)))
948 return t
ValueError: Tensor conversion requested dtype string for Tensor with dtype float32: 'Tensor("lambda_28/Placeholder:0", shape=(?, 1), dtype=float32)'
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
<ipython-input-47-5793b0703860> in <module>
1 image = tf.placeholder(shape=[], dtype=tf.string)
2 input_tensor = Input(shape = (1,), tensor = image, dtype=tf.string)
----> 3 x = Lambda(lambda image: tf.image.decode_jpeg(image))(input_tensor)
4 output_tensor = model(x)
5
~/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/engine/base_layer.py in __call__(self, inputs, **kwargs)
472 if all([s is not None
473 for s in to_list(input_shape)]):
--> 474 output_shape = self.compute_output_shape(input_shape)
475 else:
476 if isinstance(input_shape, list):
~/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/layers/core.py in compute_output_shape(self, input_shape)
650 else:
651 x = K.placeholder(shape=input_shape)
--> 652 x = self.call(x)
653 if isinstance(x, list):
654 return [K.int_shape(x_elem) for x_elem in x]
~/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/layers/core.py in call(self, inputs, mask)
685 if has_arg(self.function, 'mask'):
686 arguments['mask'] = mask
--> 687 return self.function(inputs, **arguments)
688
689 def compute_mask(self, inputs, mask=None):
<ipython-input-47-5793b0703860> in <lambda>(image)
1 image = tf.placeholder(shape=[], dtype=tf.string)
2 input_tensor = Input(shape = (1,), tensor = image, dtype=tf.string)
----> 3 x = Lambda(lambda image: tf.image.decode_jpeg(image))(input_tensor)
4 output_tensor = model(x)
5
~/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/ops/gen_image_ops.py in decode_jpeg(contents, channels, ratio, fancy_upscaling, try_recover_truncated, acceptable_fraction, dct_method, name)
946 try_recover_truncated=try_recover_truncated,
947 acceptable_fraction=acceptable_fraction, dct_method=dct_method,
--> 948 name=name)
949 _result = _op.outputs[:]
950 _inputs_flat = _op.inputs
~/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
531 if input_arg.type != types_pb2.DT_INVALID:
532 raise TypeError("%s expected type of %s." %
--> 533 (prefix, dtypes.as_dtype(input_arg.type).name))
534 else:
535 # Update the maps with the default, if needed.
TypeError: Input 'contents' of 'DecodeJpeg' Op has type float32 that does not match expected type of string.