I created and saved a neural network in a script called "training_net.py". As recommended on the scikit-learn website (http://scikit-learn.org/stable/modules/neural_networks_supervised.html#tips-on-practical-use), I scaled the training set and applied the same scaler to the test set.
Now I have a script called "prediction.py" that takes a vector of parameters and the neural network created in "training_net.py" as input and outputs a classification.
My question is about scaling the input in "prediction.py". I assume I should scale it with the same transformation used in "training_net.py", but I don't understand how to retrieve the transformation parameters from the fitted scaler.
When I call scaler.get_params(), I only get the following information: {'copy': True, 'with_mean': True, 'with_std': True}
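(For reference, a minimal sketch of where the fitted statistics live, assuming a plain StandardScaler on toy data: get_params() only echoes the constructor arguments, while the values learned from the data are stored in underscore-suffixed attributes.)

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# toy data, purely illustrative
X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
scaler = StandardScaler().fit(X)

# get_params() only reports the constructor arguments
print(scaler.get_params())  # {'copy': True, 'with_mean': True, 'with_std': True}

# the statistics learned from the data are underscore-suffixed attributes
print(scaler.mean_)   # per-feature means
print(scaler.scale_)  # per-feature standard deviations

# transform(X) is equivalent to (X - scaler.mean_) / scaler.scale_
manual = (X - scaler.mean_) / scaler.scale_
print(np.allclose(manual, scaler.transform(X)))  # True
```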
Here is a small code extract to show what I mean:
training_net.py
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.externals import joblib  # `import joblib` on recent scikit-learn versions

# scale training and test data with the same fitted scaler
scaler = StandardScaler()
scaler.fit(training_data)
training_data = scaler.transform(training_data)
test_data = scaler.transform(test_data)

# train, persist, and reload the classifier
clf = MLPClassifier()
clf.fit(training_data, training_label)
nn_name = "NN.pkl"
joblib.dump(clf, nn_name)
clf = joblib.load(nn_name)
print(clf.score(test_data, test_label))
prediction.py
from sklearn.externals import joblib  # `import joblib` on recent scikit-learn versions

model_name = "NN.pkl"
clf = joblib.load(model_name)

# need to scale input_parameters with the same training transformation before predicting!
# ?
print(clf.predict(input_parameters))
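One way I imagine carrying the transformation over (a sketch, assuming a plain StandardScaler; the filename "scaler.pkl" is made up here): persist the fitted scaler next to the model and reload it in "prediction.py".

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# --- training side (would live in training_net.py) ---
training_data = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])  # toy stand-in
scaler = StandardScaler().fit(training_data)
# joblib.dump(scaler, "scaler.pkl")   # save alongside joblib.dump(clf, "NN.pkl")

# --- prediction side (would live in prediction.py) ---
# scaler = joblib.load("scaler.pkl")  # reload the very same fitted scaler
input_parameters = np.array([[2.0, 20.0]])
scaled_input = scaler.transform(input_parameters)  # identical transformation as in training
print(scaled_input)  # the per-column training means map to zero: [[0. 0.]]
# clf.predict(scaled_input)
```

An alternative with the same effect might be a sklearn Pipeline, e.g. Pipeline([("scale", StandardScaler()), ("nn", clf)]), so a single joblib.dump persists both the scaling and the network together.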