
Building a sequence

simple_seq = [x for x in range(10000) if x % 3 == 0]

after reshape and split

x_train, x_test shape = (159, 5, 1)
y_train, y_test shape = (159, 2)
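
The reshape-and-split code isn't shown; a minimal sketch of a windowing step that turns simple_seq into 5-step inputs and 2-step targets (the stride-1 window and the 50/50 split are assumptions for illustration, so the resulting sample count won't match 159 exactly) could look like this:

import numpy as np

# Hypothetical windowing: 5 consecutive values as input, the next 2 as targets.
window, horizon = 5, 2
X, y = [], []
for i in range(len(simple_seq) - window - horizon + 1):
    X.append(simple_seq[i:i + window])
    y.append(simple_seq[i + window:i + window + horizon])

X = np.array(X).reshape(-1, window, 1)  # (samples, timesteps, features)
y = np.array(y)                         # (samples, 2)

split = len(X) // 2                     # assumed split, not the original one
x_train, x_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]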

Model

from keras.models import Sequential
from keras.layers import Conv1D, AveragePooling1D, Flatten, Dense
from keras.optimizers import Adam

model = Sequential(name='acc_test')
model.add(Conv1D(
    kernel_size=2,
    filters=128,
    strides=1,
    use_bias=True,
    activation='relu',
    padding='same',
    input_shape=(x_train.shape[1], x_train.shape[2])))

model.add(AveragePooling1D(pool_size=2, strides=1))
model.add(Flatten())
model.add(Dense(2))  # two numeric outputs

optimizer = Adam(lr=0.001)
model.compile(optimizer=optimizer, loss='mse', metrics=['accuracy'])

Train

hist = model.fit(
    x=x_train,
    y=y_train,
    epochs=100,
    validation_split=0.2)

The Result:

Epoch 100/100
127/127 [==============================] - 0s 133us/sample - loss: 0.0096 - acc: 1.0000 - val_loss: 0.6305 - val_acc: 1.0000

But when using this model to predict:

x_test[-1:] = array([[[9981],
        [9984],
        [9987],
        [9990],
        [9993]]])

model.predict(x_test[-1:])
result is: array([[10141.571, 10277.236]], dtype=float32)

How can the val_acc be 1 if the result is so far from the truth? The result was:


step    1          2
true [9996,      9999     ]
pred [10141.571, 10277.236] 
Sentinan
  • You need to fully define the learning problem; it's not clear to me what the inputs and outputs of the model (and the training data) are. Is this regression or classification? If it's regression, then it makes no sense to look at accuracy – Dr. Snoopy Nov 06 '19 at 11:00
  • Yes, indeed it is a regression task, as you can see here: ```model.add(Dense(2))```. So what metric should be used for regression then? – Sentinan Nov 06 '19 at 11:12
  • No metric is needed for regression, the loss itself is a metric – Dr. Snoopy Nov 06 '19 at 13:35
  • You may find the discussion in [What function defines accuracy in Keras when the loss is mean squared error (MSE)?](https://stackoverflow.com/questions/48775305/what-function-defines-accuracy-in-keras-when-the-loss-is-mean-squared-error-mse) useful. And indeed, as @MatiasValdenegro says, you don't need any additional metric for regression - the loss itself is a metric – desertnaut Nov 06 '19 at 15:26

2 Answers


The accuracy metric is only valid for classification tasks. If you use accuracy as the metric on a regression task, the reported values may not be meaningful at all. From your code it looks like you have a regression task, so accuracy shouldn't be used.

Below is a list of the metrics that you can use in Keras on regression problems.

  • Mean Squared Error: mean_squared_error, MSE or mse
  • Mean Absolute Error: mean_absolute_error, MAE or mae
  • Mean Absolute Percentage Error: mean_absolute_percentage_error, MAPE or mape
  • Cosine Proximity: cosine_proximity or cosine
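
For instance, a minimal sketch of compiling the model from the question with a regression metric instead of accuracy (same model and optimizer as in the question) would be:

# Report MAE alongside the MSE loss instead of a meaningless accuracy value.
model.compile(optimizer=Adam(lr=0.001),
              loss='mse',
              metrics=['mae'])  # or 'mean_absolute_error', 'mape', ...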

You can read about some theory at link and see some Keras example code at link.

Sorry, I'm a little short of time, but I am sure these links will really help you. :)

Anant Mittal

Judging by the range of your true/predicted values and the loss you used, it seems like you're trying to solve a regression problem, not classification.

So, if I understood you correctly, you're trying to predict two numeric values from each input, rather than predicting which of two classes the input belongs to.

If so, you shouldn't use the accuracy metric, because it effectively just compares the index of the maximal value in each target with the index of the maximal value in the corresponding prediction (a bit simplified). E.g. 9996 < 9999 and 10141.571 < 10277.236, so both peak at the same index and the sample counts as "correct".
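
A minimal sketch of that argmax comparison with the numbers from the question (plain NumPy, mimicking categorical accuracy rather than the exact Keras internals):

import numpy as np

y_true = np.array([[9996.0, 9999.0]])
y_pred = np.array([[10141.571, 10277.236]])

# Categorical accuracy only checks whether the index of the largest value matches.
acc = np.mean(np.argmax(y_true, axis=-1) == np.argmax(y_pred, axis=-1))
print(acc)  # 1.0 -- both rows peak at index 1, so the "accuracy" looks perfect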