
So, I've been struggling to understand what the main task of a serving_input_fn() is when a trained model is exported in TensorFlow for serving purposes. There are some examples online that explain it, but I'm having trouble defining one for myself.

The problem I'm trying to solve is a regression problem where I have 29 inputs and one output. Is there a template for creating a corresponding serving input function for that? What if I use a one-class classification problem? Would my serving input function need to change or can I use the same function?

And finally, do I always need serving input functions or is it only when I use tf.estimator to export my model?

Vahid Zadeh

2 Answers


You need a serving input function if you want your model to be able to make predictions. The serving_input_fn specifies what the caller of the predict() method will have to provide. You are essentially telling the model what data it has to get from the user.

If you have 29 inputs, your serving input function might look like:

def serving_input_fn():
    # One placeholder per input feature; the batch dimension is None.
    feature_placeholders = {
        'var1': tf.placeholder(tf.float32, [None]),
        'var2': tf.placeholder(tf.float32, [None]),
        ...
    }
    # Reshape each [batch] tensor to [batch, 1] so the model sees columns.
    features = {
        key: tf.expand_dims(tensor, -1)
        for key, tensor in feature_placeholders.items()
    }
    return tf.estimator.export.ServingInputReceiver(features,
                                                    feature_placeholders)

This would typically come in as JSON:

{"instances": [{"var1": [23, 34], "var2": [...], ...}]}
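With 29 inputs the request body follows the same pattern, just with more keys. A minimal sketch of building such a payload in plain Python (the feature names x1 … x29 are hypothetical stand-ins for your real column names):

```python
import json

# Hypothetical feature names; substitute your real 29 input names,
# matching the keys declared in serving_input_fn().
FEATURE_NAMES = ['x%d' % i for i in range(1, 30)]

def make_predict_request(rows):
    """rows: list of dicts mapping feature name -> value for one example."""
    instances = [{name: row[name] for name in FEATURE_NAMES} for row in rows]
    return json.dumps({'instances': instances})

# Example: a single instance with every feature set to 0.5.
request = make_predict_request([{name: 0.5 for name in FEATURE_NAMES}])
```

Each entry of "instances" must carry exactly the keys declared in serving_input_fn(); the server matches them to the placeholders by name.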

P.S. The output is not part of the serving input function because this is about the input to predict. If you are using a pre-made estimator, the output is already predetermined. If you are writing a custom estimator, you'd write an export signature.

Lak
  • Thanks @Lak, your answer was very helpful. If I train, say, a regressor using a tf.Estimator and define my serving_input_fn() as you advised, everything looks fine. But what if I use a custom estimator where I cannot pass in an input function? Is it the signature of the model that determines the inputs? How would you construct it in the case with 29 inputs and one output? – Vahid Zadeh Jan 30 '18 at 06:48
  • When I try this code for my model I get "too many values to unpack" – Daniel Nov 19 '18 at 19:23

If you are writing a custom Estimator, the serving input function remains the same as above. That is still the input to predict().

What changes is that you have to write a predictions dictionary for the output and specify it when creating an EstimatorSpec.
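As a rough sketch against the TF 1.x Estimator API used above (the model_fn, the feature names x1 … x29, and the simple linear layer are all hypothetical, assumed for illustration only):

```python
import tensorflow as tf

def model_fn(features, labels, mode):
    # Hypothetical 29-input linear regressor: concatenate the per-feature
    # [batch, 1] tensors into one [batch, 29] tensor.
    net = tf.concat([features['x%d' % i] for i in range(1, 30)], axis=1)
    weights = tf.get_variable('weights', [29, 1])
    output = tf.matmul(net, weights)  # one regression output per example

    predictions = {'predicted': output}
    if mode == tf.estimator.ModeKeys.PREDICT:
        # The export signature: this dictionary defines what a SavedModel
        # exported from this estimator returns from predict().
        export_outputs = {
            'predict': tf.estimator.export.PredictOutput(predictions)
        }
        return tf.estimator.EstimatorSpec(mode, predictions=predictions,
                                          export_outputs=export_outputs)

    loss = tf.losses.mean_squared_error(labels, output)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
```

The serving input function itself is passed unchanged to export_savedmodel(); only the export_outputs piece is new for a custom estimator.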

Take a look at the serving input function in model.py and the sequence_regressor in task.py in this directory:

https://github.com/GoogleCloudPlatform/training-data-analyst/tree/master/courses/machine_learning/deepdive/09_sequence/sinemodel/trainer

That is an example of a custom regression model that takes N inputs and has one output.

Lak
  • Thanks @Lak, that makes sense. But what if I save my model with the SavedModelBuilder? That's what I'm using for my custom model, and it does not allow me to pass in a serving input function. I can save the model with no issues, but the serving_input_fn() is missing. – Vahid Zadeh Jan 30 '18 at 17:05
  • Yeah, you would need to save the model with multiple heads. Look at the Cloud ML Engine core TensorFlow (non-estimator) census sample. – Lak Jan 30 '18 at 19:25