I'd like to preface this by saying I'm a complete rookie. For a project, I'm required to create an "AI" that can analyze a game's "position" and output the correct move, and I chose to train a neural network from a CSV file. After roughly 20 epochs, the accuracy stays locked at 0.2022, and the loss never meaningfully decreases (it stays above 6.3). I've tried changing the number of neurons, the batch size, shuffling, different optimizers, and the learning rate, all to no avail.
import io

import numpy as np
import pandas as pd
from google.colab import files

# Upload the training data and split off the label column
uploaded = files.upload()
train = pd.read_csv(io.BytesIO(uploaded['snowballdata.csv']))
features = train.copy()
labels = features.pop('CorrectMove')
features = np.array(features)
print(features)
# Upload and read the test data
uploaded2 = files.upload()
test = pd.read_csv(io.BytesIO(uploaded2['test4.csv']))
from tensorflow.keras import backend as K
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

# Two hidden layers (6 relu units, 20 linear units) and a single linear output
model = Sequential()
model.add(Dense(6, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(20))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='nadam', metrics=['accuracy'])
K.set_value(model.optimizer.learning_rate, 0.01)
model.fit(features, labels, epochs=30, shuffle=True, batch_size=100)
As mentioned, I tried tinkering with the layers a bit. Adding more neurons or more layers made learning even slower over the first 20 epochs, shuffling the data and changing the batch size did nothing, dropout made very little difference, and different optimizers gave far less accurate results.
The dataset has six input variables, and the output should be an integer from 1 to 3. Any suggestions as to how I can improve this?
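One thing I've been wondering is whether I should be treating this as a 3-class classification problem rather than regression. For reference, here is my rough sketch of what I understand that setup would look like (assuming CorrectMove really only takes the values 1, 2, and 3; I haven't verified this is the right direction):

# Sketch: the same pipeline reframed as 3-class classification.
# Assumption: 'CorrectMove' only takes the integer values 1, 2, 3.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Nadam

clf = Sequential([
    Dense(32, activation='relu', input_shape=(6,)),  # six input variables
    Dropout(0.2),
    Dense(32, activation='relu'),
    Dense(3, activation='softmax'),                  # one probability per move
])
clf.compile(
    loss='sparse_categorical_crossentropy',  # expects integer class ids 0..2
    optimizer=Nadam(learning_rate=0.01),
    metrics=['accuracy'],
)
# Shift the labels from 1..3 down to 0..2 to match the three output units
clf.fit(features, np.asarray(labels) - 1, epochs=30, batch_size=100, shuffle=True)
# The predicted move is the argmax over the three outputs, shifted back to 1..3
predicted_moves = clf.predict(features).argmax(axis=1) + 1

Would switching to something like this be the right direction, or is the problem elsewhere (the data itself, the learning rate, etc.)?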