
I am trying to write a linear classifier using TensorFlow with the following code, which works:

m = LinearClassifier(model_dir=model_dir, feature_columns=wide_columns)
m.fit(input_fn=training, steps=FLAGS.train_steps)
results = m.evaluate(input_fn=test, steps=1)
for key in sorted(results):
    print("%s: %s" % (key, results[key]))

However, I am interested in getting an ndarray of predictions (i.e. an array of 0s and 1s), one per test-feature row. I would like to compute further metrics (beyond accuracy and precision) based on these predictions.

Following is the output I get:

accuracy: 0.931035
accuracy/baseline_label_mean: 0.931035
accuracy/threshold_0.500000_mean: 0.931035
auc: 0.5
global_step: 202
labels/actual_label_mean: 0.931035
labels/prediction_mean: 1.0
loss: 1.11758e+11
precision/positive_threshold_0.500000_mean: 0.931035
recall/positive_threshold_0.500000_mean: 1.0

Following is the output I expect (the first five numbers in each row are the features, and the 1 or 0 is the classifier's label):

1,2,3,4,5 : 1
3,4,4,2,1 : 0
1,2,3,4,1 : 1
1,2,3,4,5 : 1
4,4,2,2,2 : 0
5,4,1,2,1 : 0

How can I get such an output from the TensorFlow APIs?

  • What you want in the end is to add a key to each row of the features you are going to predict on. Check this: https://stackoverflow.com/questions/44381879/training-and-predicting-with-instance-keys If you want the features passed through the model without being processed (although in your case it is precisely the features that need to be processed), I think you can apply the same idea as for the key. Still, I think it is easier to add a key (or hash the entry rows) and then join the results on the corresponding key to recover the features. – Guille Nov 30 '17 at 11:08
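The key-join idea from the comment can be sketched in plain NumPy. The model call is faked here with a hypothetical `predictions` dict of (key, label) pairs, since the real pairs would come back from the estimator; the feature rows are taken from the expected output above:

```python
import numpy as np

# Hypothetical feature rows (from the expected output above)
features = np.array([[1, 2, 3, 4, 5],
                     [3, 4, 4, 2, 1],
                     [1, 2, 3, 4, 1]])
keys = np.arange(len(features))  # one instance key per row

# Stand-in for model output: {key: predicted label}, which may
# come back in a different order than the input rows
predictions = {2: 1, 0: 1, 1: 0}

# Join each prediction back to its feature row via the key
lines = ["%s : %d" % (",".join(str(v) for v in features[k]), predictions[k])
         for k in keys]
for line in lines:
    print(line)
```

This prints each feature row next to its label in the `1,2,3,4,5 : 1` format asked for in the question, regardless of the order in which predictions are returned.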

1 Answer


I found a solution after some research. I hope it helps someone:

# as_iterable=False returns the predictions as a single ndarray
predictions = m.predict(input_fn=lambda: input_fn(df_test), as_iterable=False)
for p in predictions:
    print(str(p), "\n")
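Once the predictions are available as an ndarray of 0s and 1s, further metrics can be computed directly. A minimal NumPy sketch, where `y_pred` and `y_true` are hypothetical stand-ins for the predictions above and the test labels:

```python
import numpy as np

# Hypothetical example arrays; in practice y_pred comes from
# m.predict(...) and y_true from the test labels
y_pred = np.array([1, 0, 1, 1, 0, 0])
y_true = np.array([1, 0, 1, 0, 0, 1])

# Confusion-matrix counts from the 0/1 arrays
tp = int(np.sum((y_pred == 1) & (y_true == 1)))
tn = int(np.sum((y_pred == 0) & (y_true == 0)))
fp = int(np.sum((y_pred == 1) & (y_true == 0)))
fn = int(np.sum((y_pred == 0) & (y_true == 1)))

precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
print(tp, tn, fp, fn, precision, recall)
```

The same arrays can also be fed to any external metrics library once you have them in this form.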