
For each sample, I have a 2D array that is NOT an image, and I would like to run inference on it via TensorFlow Serving. In the past, I have deployed TensorFlow Serving successfully thanks to the answer to this post, which uses the following serving_input_receiver_fn:

import tensorflow as tf

HEIGHT = 199
WIDTH = 199
CHANNELS = 1

def serving_input_receiver_fn():

  def decode_and_resize(image_str_tensor):
    """Decodes a JPEG string, resizes it, and returns a uint8 tensor."""
    image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
    image = tf.expand_dims(image, 0)
    image = tf.image.resize_bilinear(
        image, [HEIGHT, WIDTH], align_corners=False)
    image = tf.squeeze(image, axis=[0])
    image = tf.cast(image, dtype=tf.uint8)
    return image

  # Optional; currently necessary for batch prediction.
  key_input = tf.placeholder(tf.string, shape=[None])
  key_output = tf.identity(key_input)

  input_ph = tf.placeholder(tf.string, shape=[None], name='image_binary')
  images_tensor = tf.map_fn(
      decode_and_resize, input_ph, back_prop=False, dtype=tf.uint8)
  images_tensor = tf.image.convert_image_dtype(images_tensor, dtype=tf.float32)

  return tf.estimator.export.ServingInputReceiver(
      {'images': images_tensor},
      {'bytes': input_ph})
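
For reference, this is roughly how I build the request body for the receiver above. It is a minimal sketch assuming TensorFlow Serving's REST predict API; the 'bytes' key matches the receiver_tensors key, and binary content is wrapped in a {"b64": ...} object:

import base64
import json

# Read the raw image bytes for one sample.
with open('sample.jpg', 'rb') as f:
    img_data = f.read()

# 'bytes' matches the receiver_tensors key in the export above; binary strings
# are base64-encoded and wrapped in a {"b64": ...} object for the REST API.
payload = json.dumps({
    'instances': [
        {'bytes': {'b64': base64.b64encode(img_data).decode('utf-8')}}
    ]
})
# POST payload to http://<host>:8501/v1/models/<model_name>:predict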

However, for non-image arrays, two things become unclear:

  1. How to decode the encoded string tensor. I took a look at tf.io.decode_image, but it does not seem to preserve the 2D shape of a general array.
  2. How to encode the array on the client side. For images, I encoded the raw image data with base64.b64encode(img_data); how should a general 2D array be encoded? (One direction I have been considering is sketched after this list.)
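
For what it's worth, one direction I have been considering (not from the linked post, and not verified end to end) is to serialize each array with tf.io.serialize_tensor on the client and parse it back with tf.io.parse_tensor in the receiver. In the sketch below, ROWS, COLS, and the 'arrays' feature name are placeholders:

ROWS = 199   # placeholder; substitute the real array dimensions
COLS = 50    # placeholder

def serving_input_receiver_fn():

  def decode_array(serialized_tensor):
    """Parses one serialized float32 tensor back into a fixed-shape 2D array."""
    array = tf.io.parse_tensor(serialized_tensor, out_type=tf.float32)
    array.set_shape([ROWS, COLS])
    return array

  input_ph = tf.placeholder(tf.string, shape=[None], name='array_binary')
  arrays_tensor = tf.map_fn(
      decode_array, input_ph, back_prop=False, dtype=tf.float32)

  return tf.estimator.export.ServingInputReceiver(
      {'arrays': arrays_tensor},
      {'bytes': input_ph})

# Client side, as the counterpart of base64.b64encode(img_data) in the image case:
#   serialized = tf.io.serialize_tensor(arr.astype(np.float32))   # arr is a numpy array
#   body = {'b64': base64.b64encode(serialized.numpy()).decode('utf-8')}
# (.numpy() assumes eager execution; in TF1 graph mode, evaluate the tensor in a Session.)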

In short, how can the linked post's answer be generalized to the non-image array case?
