
I'm trying to learn how machine learning works, and I started with a simple stock price prediction program. I've spent days narrowing down the issue, but my research and progress have stalled. The problem is that accuracy does not increase during training, and val_accuracy does not change either. I've shrunk the dataset to observe the behavior more closely, and of course it still doesn't change...

I have tried switching the loss function and the activation, and I've made a number of changes to how the data is prepared... I don't understand what is going on. This is on a single stock ticker for now (eventually I'd like one model for the top 100 stocks).

My layers / model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

def createModel(X_train):
    '''
        @description - Build a stacked-LSTM binary classifier sized from
                       X_train's (timesteps, features) shape.
    '''
    # Model
    model = Sequential()
    # Note: activation = 'relu' (instead of the default 'tanh') is what
    # triggers the "will not use cuDNN kernel" warnings in the output below.
    model.add(LSTM(512, activation = 'relu', return_sequences = True, input_shape = X_train.shape[1:]))
    model.add(Dropout(0.3))
    model.add(LSTM(512, activation = 'relu', return_sequences = True))
    model.add(Dropout(0.3))
    model.add(LSTM(256, activation = 'relu', return_sequences = True))
    model.add(Dropout(0.3))
    model.add(LSTM(128, activation = 'relu', return_sequences = False))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation = 'sigmoid'))

    # print(model.summary())
    # opt = tf.keras.optimizers.Adam(learning_rate = 0.01)
    model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
    return model
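To sanity-check the metric itself, here's how I understand Keras' binary `accuracy` to behave with a sigmoid output (plain NumPy; the round-at-0.5 comparison is my assumption of what the metric does internally):

```python
import numpy as np

# Binary accuracy effectively computes: mean(round(pred) == label).
def binary_accuracy(y_true, y_pred, threshold = 0.5):
    return float(np.mean((y_pred > threshold).astype(float) == y_true))

# With genuine 0/1 labels the metric responds to the predictions...
labels_binary = np.array([0.0, 1.0, 1.0, 0.0])
preds = np.array([0.2, 0.9, 0.6, 0.4])
print(binary_accuracy(labels_binary, preds))   # 1.0

# ...but with continuous labels (e.g. a min-max scaled price column),
# the rounded prediction almost never equals the label exactly, so
# accuracy sits near zero no matter what the model learns.
labels_scaled = np.array([0.31, 0.57, 0.42, 0.66])
print(binary_accuracy(labels_scaled, preds))   # 0.0
```

So if the targets fed to `fit()` aren't exactly 0/1, a frozen accuracy is expected regardless of the loss curve.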

I read in and prepare the data with the following snippets.

        if filename.endswith('.csv'):
            data = pd.read_csv(filename)

        # Clean up file name to extract ticker
        filename = filename.replace('.csv', '')
        data = data.drop(['Dividends', 'Stock Splits'], axis = 1)
        data['Date'] = list(map(convertDateToNumber, data['Date']))
        data.set_index('Date', inplace = True)

        # Shift for a new column to do calculations on, then drop the shifted column after
        data['Per Change'] = data['Open'].shift(1)
        data['Percent Change'] = list(map(calculatePercentChange, data['Open'], data['Close']))
        data['Class'] = (list(map(classify, data['Open'], data['Close'])))

        # Drop the unnecessary headers now...
        data = data.drop('Per Change', axis = 1)

        data.fillna(method = "ffill", inplace = True)
        data.dropna(inplace = True)

        trainingData = int(len(data) * 0.75)
        training_data = data.head(trainingData).values.tolist()
        training_data = scaler.fit_transform(training_data)

        testingData = int(len(data) * 0.25)
        testing_data = data.tail(testingData).values.tolist()
        # Reuse the statistics fitted on the training split; calling
        # fit_transform again here would scale train and test differently.
        testing_data = scaler.transform(testing_data)

        X_train = []
        y_train = []

        for i in range(training_data.shape[0]):
            X_train.append(training_data[i])
            y_train.append(training_data[i, 2])
        X_train, y_train = np.array(X_train), np.array(y_train)
        X_train = X_train.reshape(X_train.shape[0], 1, X_train.shape[1])
        # y_train = y_train.reshape(y_train.shape[0], 1)

        # Test Data
        X_test = []
        y_test = []

        for i in range(testing_data.shape[0]):
            X_test.append(testing_data[i])
            y_test.append(testing_data[i, 2])
        X_test, y_test = np.array(X_test), np.array(y_test)
        X_test = X_test.reshape(X_test.shape[0], 1, X_test.shape[1])
        # y_test = y_test.reshape(y_test.shape[0], 1)

        # Create the model
        model = createModel(X_train)

        # Evaluate the model
        print('')
        loss, acc = model.evaluate(X_test, y_test)
        print("\n---------- Untrained model, accuracy: {:5.2f}% ----------\n".format(100 * acc))

        if os.path.isdir(modelPath.replace('data model.h5', '')):
            try:
                model = tf.keras.models.load_model(modelPath, compile = True)
                # Re-evaluate the model
                loss, acc = model.evaluate(X_test, y_test)
                print("\n---------- Restored model, accuracy: {:5.2f}% ----------\n".format(100 * acc))
            except Exception:
                pass

        tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir = log_dir, histogram_freq = 1)
        model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(filepath = checkpointPath, save_weights_only = True, monitor = 'accuracy', mode = 'max', save_freq = 5)
        # To throw it all together... fit() trains the model
        model.fit(X_train, y_train, validation_data = (X_test, y_test), shuffle = True, epochs = 50, batch_size = 500, callbacks = [tensorboard_callback, model_checkpoint_callback])
        model.save(modelPath)

        # # Call the model protocol
        y_pred = model.predict(X_test)

        scale = 1 / scaler.scale_[0]
        y_test = y_test * scale
        y_pred = y_pred * scale

        plt.plot(y_test, color = 'blue', label = '{} Real Stock Price'.format(filename + ' ' + companyNameToTicker[filename]))
        plt.plot(y_pred, color = 'red', label = '{} Predicted Stock Price'.format(filename + ' ' + companyNameToTicker[filename]))
        plt.title('{} Stock Price Prediction'.format(filename + ' ' + companyNameToTicker[filename]))
        plt.xlabel('Time')
        plt.ylabel('{} Stock Prediction'.format(filename + ' ' + companyNameToTicker[filename]))
        plt.legend()
        # plt.ion()
        # plt.pause(0.05)
        plt.show()
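For clarity, the scaling behaviour I was aiming for looks roughly like this (a simplified NumPy sketch of per-column min-max scaling, rather than the actual sklearn scaler):

```python
import numpy as np

# Fit min/max on the TRAINING split only, then reuse those statistics
# when transforming the test split.
train = np.array([[10.0], [20.0], [30.0]])
test = np.array([[25.0], [40.0]])

lo, hi = train.min(axis = 0), train.max(axis = 0)   # "fit" on train only
train_scaled = (train - lo) / (hi - lo)             # [[0.0], [0.5], [1.0]]
test_scaled = (test - lo) / (hi - lo)               # [[0.75], [1.5]]

# Calling fit_transform on the test split instead would compute fresh
# min/max from the test data, putting train and test on different scales.
```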

Here is the output the above code produces...

1/1 [==============================] - ETA: 0s - loss: 0.5488 - accuracy: 0.0476WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (10.187773). Check your callbacks.
1/1 [==============================] - 1s 822ms/step - loss: 0.5488 - accuracy: 0.0476 - val_loss: 0.4729 - val_accuracy: 0.1429
Epoch 3/50
1/1 [==============================] - 1s 517ms/step - loss: 0.5472 - accuracy: 0.0476 - val_loss: 0.4725 - val_accuracy: 0.1429
Epoch 4/50
1/1 [==============================] - 0s 485ms/step - loss: 0.5476 - accuracy: 0.0476 - val_loss: 0.4723 - val_accuracy: 0.1429
Epoch 5/50
1/1 [==============================] - ETA: 0s - loss: 0.5484 - accuracy: 0.0476WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.438490). Check your callbacks.
1/1 [==============================] - 1s 507ms/step - loss: 0.5484 - accuracy: 0.0476 - val_loss: 0.4725 - val_accuracy: 0.1429
Epoch 6/50
1/1 [==============================] - 1s 527ms/step - loss: 0.5476 - accuracy: 0.0476 - val_loss: 0.4732 - val_accuracy: 0.1429
Epoch 7/50
1/1 [==============================] - 0s 413ms/step - loss: 0.5481 - accuracy: 0.0476 - val_loss: 0.4738 - val_accuracy: 0.1429
Epoch 8/50
1/1 [==============================] - 0s 491ms/step - loss: 0.5475 - accuracy: 0.0476 - val_loss: 0.4743 - val_accuracy: 0.1429
Epoch 9/50
1/1 [==============================] - 0s 408ms/step - loss: 0.5479 - accuracy: 0.0476 - val_loss: 0.4748 - val_accuracy: 0.1429
Epoch 10/50
1/1 [==============================] - ETA: 0s - loss: 0.5478 - accuracy: 0.0476WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.434556). Check your callbacks.
1/1 [==============================] - 0s 482ms/step - loss: 0.5478 - accuracy: 0.0476 - val_loss: 0.4751 - val_accuracy: 0.1429
Epoch 11/50
1/1 [==============================] - 1s 535ms/step - loss: 0.5475 - accuracy: 0.0476 - val_loss: 0.4754 - val_accuracy: 0.1429
Epoch 12/50
1/1 [==============================] - 0s 408ms/step - loss: 0.5485 - accuracy: 0.0476 - val_loss: 0.4758 - val_accuracy: 0.1429
Epoch 13/50
1/1 [==============================] - 0s 392ms/step - loss: 0.5487 - accuracy: 0.0476 - val_loss: 0.4764 - val_accuracy: 0.1429
Epoch 14/50
1/1 [==============================] - 0s 460ms/step - loss: 0.5488 - accuracy: 0.0476 - val_loss: 0.4768 - val_accuracy: 0.1429
Epoch 15/50
1/1 [==============================] - ETA: 0s - loss: 0.5486 - accuracy: 0.0476WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.388608). Check your callbacks.
1/1 [==============================] - 0s 397ms/step - loss: 0.5486 - accuracy: 0.0476 - val_loss: 0.4770 - val_accuracy: 0.1429
Epoch 16/50
1/1 [==============================] - 1s 573ms/step - loss: 0.5475 - accuracy: 0.0476 - val_loss: 0.4770 - val_accuracy: 0.1429
Epoch 17/50
1/1 [==============================] - 0s 456ms/step - loss: 0.5479 - accuracy: 0.0476 - val_loss: 0.4766 - val_accuracy: 0.1429
Epoch 18/50
1/1 [==============================] - 0s 392ms/step - loss: 0.5476 - accuracy: 0.0476 - val_loss: 0.4763 - val_accuracy: 0.1429
Epoch 19/50
1/1 [==============================] - 0s 404ms/step - loss: 0.5479 - accuracy: 0.0476 - val_loss: 0.4760 - val_accuracy: 0.1429
Epoch 20/50
1/1 [==============================] - ETA: 0s - loss: 0.5479 - accuracy: 0.0476WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.362628). Check your callbacks.
1/1 [==============================] - 0s 452ms/step - loss: 0.5479 - accuracy: 0.0476 - val_loss: 0.4758 - val_accuracy: 0.1429
Epoch 21/50
1/1 [==============================] - 0s 473ms/step - loss: 0.5476 - accuracy: 0.0476 - val_loss: 0.4753 - val_accuracy: 0.1429
Epoch 22/50
1/1 [==============================] - 0s 428ms/step - loss: 0.5496 - accuracy: 0.0476 - val_loss: 0.4744 - val_accuracy: 0.1429
Epoch 23/50
1/1 [==============================] - 1s 584ms/step - loss: 0.5475 - accuracy: 0.0476 - val_loss: 0.4741 - val_accuracy: 0.1429
Epoch 24/50
1/1 [==============================] - 0s 446ms/step - loss: 0.5478 - accuracy: 0.0476 - val_loss: 0.4743 - val_accuracy: 0.1429
Epoch 25/50
1/1 [==============================] - ETA: 0s - loss: 0.5476 - accuracy: 0.0476WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.530422). Check your callbacks.
1/1 [==============================] - 1s 646ms/step - loss: 0.5476 - accuracy: 0.0476 - val_loss: 0.4746 - val_accuracy: 0.1429
Epoch 26/50
1/1 [==============================] - 1s 506ms/step - loss: 0.5487 - accuracy: 0.0476 - val_loss: 0.4756 - val_accuracy: 0.1429
Epoch 27/50
1/1 [==============================] - 0s 413ms/step - loss: 0.5482 - accuracy: 0.0476 - val_loss: 0.4765 - val_accuracy: 0.1429
Epoch 28/50
1/1 [==============================] - 0s 382ms/step - loss: 0.5481 - accuracy: 0.0476 - val_loss: 0.4772 - val_accuracy: 0.1429
Epoch 29/50
1/1 [==============================] - 0s 421ms/step - loss: 0.5487 - accuracy: 0.0476 - val_loss: 0.4774 - val_accuracy: 0.1429
Epoch 30/50
1/1 [==============================] - ETA: 0s - loss: 0.5483 - accuracy: 0.0476WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.657228). Check your callbacks.
1/1 [==============================] - 1s 955ms/step - loss: 0.5483 - accuracy: 0.0476 - val_loss: 0.4782 - val_accuracy: 0.1429
Epoch 31/50
1/1 [==============================] - 1s 634ms/step - loss: 0.5475 - accuracy: 0.0476 - val_loss: 0.4792 - val_accuracy: 0.1429
Epoch 32/50
1/1 [==============================] - 0s 364ms/step - loss: 0.5479 - accuracy: 0.0476 - val_loss: 0.4800 - val_accuracy: 0.1429
Epoch 33/50
1/1 [==============================] - 0s 404ms/step - loss: 0.5478 - accuracy: 0.0476 - val_loss: 0.4808 - val_accuracy: 0.1429
Epoch 34/50
1/1 [==============================] - 0s 381ms/step - loss: 0.5477 - accuracy: 0.0476 - val_loss: 0.4812 - val_accuracy: 0.1429
Epoch 35/50
1/1 [==============================] - ETA: 0s - loss: 0.5476 - accuracy: 0.0476WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.342873). Check your callbacks.
1/1 [==============================] - 1s 524ms/step - loss: 0.5476 - accuracy: 0.0476 - val_loss: 0.4810 - val_accuracy: 0.1429
Epoch 36/50
1/1 [==============================] - 0s 442ms/step - loss: 0.5485 - accuracy: 0.0476 - val_loss: 0.4808 - val_accuracy: 0.1429
Epoch 37/50
1/1 [==============================] - 1s 514ms/step - loss: 0.5493 - accuracy: 0.0476 - val_loss: 0.4805 - val_accuracy: 0.1429
Epoch 38/50
1/1 [==============================] - 1s 630ms/step - loss: 0.5503 - accuracy: 0.0476 - val_loss: 0.4806 - val_accuracy: 0.1429
Epoch 39/50
1/1 [==============================] - ETA: 0s - loss: 0.5478 - accuracy: 0.0476WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.724169). Check your callbacks.
1/1 [==============================] - 1s 1s/step - loss: 0.5478 - accuracy: 0.0476 - val_loss: 0.4812 - val_accuracy: 0.1429
Epoch 40/50
1/1 [==============================] - ETA: 0s - loss: 0.5475 - accuracy: 0.0476WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.356633). Check your callbacks.
1/1 [==============================] - 0s 400ms/step - loss: 0.5475 - accuracy: 0.0476 - val_loss: 0.4813 - val_accuracy: 0.1429
Epoch 41/50
1/1 [==============================] - 1s 625ms/step - loss: 0.5479 - accuracy: 0.0476 - val_loss: 0.4814 - val_accuracy: 0.1429
Epoch 42/50
1/1 [==============================] - 1s 671ms/step - loss: 0.5481 - accuracy: 0.0476 - val_loss: 0.4810 - val_accuracy: 0.1429
Epoch 43/50
1/1 [==============================] - 1s 527ms/step - loss: 0.5482 - accuracy: 0.0476 - val_loss: 0.4803 - val_accuracy: 0.1429
Epoch 44/50
1/1 [==============================] - 1s 688ms/step - loss: 0.5479 - accuracy: 0.0476 - val_loss: 0.4797 - val_accuracy: 0.1429
Epoch 45/50
1/1 [==============================] - ETA: 0s - loss: 0.5475 - accuracy: 0.0476WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.479657). Check your callbacks.
1/1 [==============================] - 1s 505ms/step - loss: 0.5475 - accuracy: 0.0476 - val_loss: 0.4789 - val_accuracy: 0.1429
Epoch 46/50
1/1 [==============================] - 1s 637ms/step - loss: 0.5479 - accuracy: 0.0476 - val_loss: 0.4776 - val_accuracy: 0.1429
Epoch 47/50
1/1 [==============================] - 0s 383ms/step - loss: 0.5490 - accuracy: 0.0476 - val_loss: 0.4772 - val_accuracy: 0.1429
Epoch 48/50
1/1 [==============================] - 0s 420ms/step - loss: 0.5486 - accuracy: 0.0476 - val_loss: 0.4769 - val_accuracy: 0.1429
Epoch 49/50
1/1 [==============================] - 0s 428ms/step - loss: 0.5478 - accuracy: 0.0476 - val_loss: 0.4769 - val_accuracy: 0.1429
Epoch 50/50
1/1 [==============================] - ETA: 0s - loss: 0.5482 - accuracy: 0.0476WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.509478). Check your callbacks.
1/1 [==============================] - 0s 417ms/step - loss: 0.5482 - accuracy: 0.0476 - val_loss: 0.4772 - val_accuracy: 0.1429
1 Physical GPUs, 1 Logical GPU
WARNING:tensorflow:Layer lstm_16 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer lstm_17 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer lstm_18 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer lstm_19 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU

1/1 [==============================] - 0s 67ms/step - loss: 0.6931 - accuracy: 0.0714

---------- Untrained model, accuracy:  7.14% ----------

WARNING:tensorflow:Layer lstm will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer lstm_1 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer lstm_2 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
WARNING:tensorflow:Layer lstm_3 will not use cuDNN kernel since it doesn't meet the cuDNN kernel criteria. It will use generic GPU kernel as fallback when running on GPU
1/1 [==============================] - 0s 3ms/step - loss: 0.4772 - accuracy: 0.1429

---------- Restored model, accuracy: 14.29% ----------

Shapes of datasets: X_train: (9538, 1, 7) y_train: (9538,) X_test: (3179, 1, 7) y_test (3179,)
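To illustrate the reshape mentioned in the comments below: Keras LSTM layers expect input shaped `(samples, timesteps, features)`, so I insert a timesteps axis of 1 (each "sequence" is a single day, which may defeat the point of an LSTM but satisfies the expected `ndim=3`). A minimal NumPy sketch:

```python
import numpy as np

X = np.arange(21, dtype = float).reshape(3, 7)      # 3 samples, 7 features
X_lstm = X.reshape(X.shape[0], 1, X.shape[1])       # add a timesteps axis
print(X_lstm.shape)                                 # (3, 1, 7)
```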

  • What are the shapes of your train and test datasets? – geoph9 Jun 02 '20 at 19:15
  • X_train: (9538, 1, 7) y_train: (9538,) X_test: (3179, 1, 7) y_test (3179,) @geoph9 – IAC93 Jun 02 '20 at 19:18
  • Check this: https://stackoverflow.com/questions/37213388/keras-accuracy-does-not-change – geoph9 Jun 02 '20 at 19:21
  • So the optimizer trick did not work, and I believe the data is good, printed it out and looks perfectly fine. Learning rate also did not do anything for me... Well accuracy changes but the predicted is now a straight line – IAC93 Jun 02 '20 at 19:45
  • Try shuffling your data and see if there are any changes. For some reason the model stops learning. Also, why is `X_train` and `X_test` 3-dimensional? What is the number of features? – geoph9 Jun 02 '20 at 19:52
  • The model to my understanding needed to take in a 3-d array, so I reshaped the X_train and X_test, I want to predict the high price, but I use the open high low closing price and the volume. I get 'ValueError: Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 7]' If i don't reshape it to 3d – IAC93 Jun 02 '20 at 20:04
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/215196/discussion-between-geoph9-and-iac93). – geoph9 Jun 02 '20 at 20:18

1 Answer


I think the information contained in a stock's daily K-line (candlestick) data is too limited on its own. You could try adding more data.
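For example, pooling several tickers into one larger training frame instead of training on a single stock (a made-up sketch; the tickers and values here are placeholders, not real market data):

```python
import numpy as np
import pandas as pd

# Hypothetical helper standing in for whatever loads one ticker's history.
def fake_history(ticker, days = 5):
    return pd.DataFrame({
        'Ticker': ticker,
        'Open': np.linspace(100, 110, days),
        'Close': np.linspace(101, 111, days),
    })

# Pool multiple tickers into one training frame.
frames = [fake_history(t) for t in ['AAA', 'BBB', 'CCC']]
pooled = pd.concat(frames, ignore_index = True)
print(len(pooled))   # 15 rows instead of 5
```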

  • Your answer could be improved with additional supporting information. Please [edit] to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers [in the help center](/help/how-to-answer). – Community Apr 06 '23 at 16:15