
I'm trying to use forward_features to get instance keys for cloudml, but I always get errors that I'm not sure how to fix. The preprocessing section that uses tf.Transform is a modification of https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/reddit_tft where the instance key is a string and everything else is a bunch of floats.

def gzip_reader_fn():
  return tf.TFRecordReader(options=tf.python_io.TFRecordOptions(
      compression_type=tf.python_io.TFRecordCompressionType.GZIP))


def get_transformed_reader_input_fn(transformed_metadata,
                                    transformed_data_paths,
                                    batch_size,
                                    mode):
  """Wrap the get input features function to provide the runtime arguments."""
  return input_fn_maker.build_training_input_fn(
      metadata=transformed_metadata,
      file_pattern=(
          transformed_data_paths[0] if len(transformed_data_paths) == 1
          else transformed_data_paths),
      training_batch_size=batch_size,
      label_keys=[],
      #feature_keys=FEATURE_COLUMNS,
      #key_feature_name='example_id',
      reader=gzip_reader_fn,
      reader_num_threads=4,
      queue_capacity=batch_size * 2,
      randomize_input=(mode != tf.contrib.learn.ModeKeys.EVAL),
      num_epochs=(1 if mode == tf.contrib.learn.ModeKeys.EVAL else None))

estimator = KMeansClustering(
      num_clusters=8,
      initial_clusters=KMeansClustering.KMEANS_PLUS_PLUS_INIT,
      kmeans_plus_plus_num_retries=32,
      relative_tolerance=0.0001)

estimator = tf.contrib.estimator.forward_features(
      estimator,
      'example_id')

train_input_fn = get_transformed_reader_input_fn(
      transformed_metadata, args.train_data_paths, args.batch_size,
      tf.contrib.learn.ModeKeys.TRAIN)

estimator.train(input_fn=train_input_fn)

If I pass the keys column in alongside the training features, I get the error Tensors in list passed to 'values' of 'ConcatV2' Op have types [float32, float32, string, float32, float32, float32, float32, float32, float32, float32, float32, float32, float32, float32, float32, float32, float32, float32, float32, float32, float32, float32, float32, float32] that don't all match. However, if I don't pass in the instance keys during training, I get a ValueError saying that the key doesn't exist in the features. Also, if I change the key column name in the forward_features call from 'example_id' to some random name that isn't a column, I still get the former error instead of the latter. Can anyone help me make sense of this?

Max Deng

1 Answer


Please check the following:

(1) forward_features only works with tf.estimator. Ensure that you are not using a tf.contrib.learn estimator. (Update: you are using a class that inherits from tf.estimator.)

(2) Make sure your input function reads in the key-column. So, the key column has to be part of your input dataset.

(3) In the case of tf.Transform, point (2) means that the transform metadata has to reflect the schema of the key. The error message you are seeing seems to indicate that the schema specified it as a float when it's actually a string, or something like that.
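For illustration, here is a hedged sketch of what the raw-data schema might look like with the tensorflow_transform 0.x API of that era; the feature names (`example_id`, `feature_a`, `feature_b`) are placeholders, not names from your pipeline:

```python
import tensorflow as tf
from tensorflow_transform.tf_metadata import dataset_schema

# Hypothetical feature spec: the instance key must be declared as a
# string, while the model features are floats. If the key is declared
# as float32 here, downstream ops will try to treat it as a float.
feature_spec = {
    'example_id': tf.FixedLenFeature([], tf.string),   # pass-through key
    'feature_a': tf.FixedLenFeature([], tf.float32),
    'feature_b': tf.FixedLenFeature([], tf.float32),
}
schema = dataset_schema.from_feature_spec(feature_spec)
```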

(4) Make sure the key column is NOT used by your model. So, you should not create a FeatureColumn with it. In other words, the model will simply pass through the key that is read by the input_fn to the predictor.

(5) If you don't see the key in the output, see if this workaround helps you:

https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive/07_structured/babyweight/trainer/model.py#L132

Essentially, forward_features changes the graph in memory but not the exported signature. My workaround fixes this.

Lak
  • Doesn't tf.contrib.factorization.KMeansClustering inherit from tf.estimator? Are you thinking of tf.contrib.learn.KMeansClustering? – Max Deng Mar 08 '18 at 17:06
  • I tried the things that you've said, but I'm still getting the Tensors in list passed to 'values' of 'ConcatV2' Op have types that don't all match error. – Max Deng Mar 12 '18 at 16:53
  • Ah, I see the problem. It's here: https://github.com/tensorflow/tensorflow/blob/r1.6/tensorflow/contrib/factorization/python/ops/kmeans.py#L108 -- the model simply takes all the features and tries to use them all to compute Euclidean distances. In your case, of course, you have a key which should not be used in the model. Let me follow up with author(s) of the module on the best way to fix this problem. – Lak Mar 12 '18 at 18:06