I am fairly new to machine learning, but I have put together an LSTM network for educational purposes that seems to work reasonably well.
What I have not been able to fully understand is the numerical range of the input and output variables. I standardized my input and training data so that every variable is centered at 0 with a standard deviation of 1. When I test the network, however, all of my predictions fall between 0 and 1; there are never any negative values, even though the training data contained negative values.
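To be concrete, this is the kind of standardization I mean (a minimal sketch; my exact preprocessing code may differ):

```python
import numpy as np

# Standardize a column to mean 0 and standard deviation 1.
data = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
standardized = (data - data.mean()) / data.std()

print(standardized)         # contains negative values
print(standardized.mean())  # ~0.0
print(standardized.std())   # ~1.0
```

So negative values are definitely present in the data the network is trained on.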
I have worked around this by splitting each target into one output for positive values and another for negative values in my training data. For example:
Original training data:
data
-1.0
-0.5
 0.0
 0.5
 1.0
becomes:
pos_data  neg_data
0.0       1.0
0.0       0.5
0.0       0.0
0.5       0.0
1.0       0.0
After I run the model, I convert the pos_data and neg_data predictions back into a single column of positive and negative values. This works, but it feels like it should be unnecessary.
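The split-and-recombine step looks roughly like this (a sketch of the workaround; variable names are just for illustration):

```python
import numpy as np

data = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

# Split the signed series into two non-negative target columns.
pos_data = np.maximum(data, 0.0)   # positive part
neg_data = np.maximum(-data, 0.0)  # magnitude of the negative part

# After prediction, recombine the two outputs into one signed column.
recombined = pos_data - neg_data

print(np.allclose(recombined, data))  # True
```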
Does Keras allow negative values in the input and training data? If so, does anyone have an idea why I would only be getting positive predictions when the model was trained on both positive and negative values?
Thank you!