
I am trying to build a simple model that learns the 5 times table (multiplication by 5), but I got some weird results, as shown below.

Here is one of the scripts I have tried:


import random

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# generate 10,000 training pairs (n, 5 * n) with n drawn from [0, 300]
X = []
Y = []

for i in range(10000):
    n = random.randint(0, 300)
    X.append(n)
    Y.append(n * 5)

X = np.array(X)
Y = np.array(Y)

model = Sequential()
model.add(Dense(10, activation='relu', input_shape=(1,)))
model.add(Dense(10, activation='relu'))
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])  # cross-entropy might give a better result as a loss function?
model.fit(X, Y, batch_size=50, validation_split=0.05, epochs=10, verbose=1, shuffle=True)

test_array = np.array([4, 27, 100, 121, 9])
print(model.predict(test_array))


Here is the training output:


Epoch 1/10
190/190 [==============================] - 1s 2ms/step - loss: 673326.2091 - accuracy: 0.0000e+00 - val_loss: 587270.0625 - val_accuracy: 0.0000e+00
Epoch 2/10
190/190 [==============================] - 0s 687us/step - loss: 542008.4090 - accuracy: 0.0000e+00 - val_loss: 429100.4688 - val_accuracy: 0.0000e+00
Epoch 3/10
190/190 [==============================] - 0s 815us/step - loss: 392556.8519 - accuracy: 0.0000e+00 - val_loss: 302378.4375 - val_accuracy: 0.0000e+00
Epoch 4/10
190/190 [==============================] - 0s 637us/step - loss: 283822.7286 - accuracy: 0.0000e+00 - val_loss: 244800.0781 - val_accuracy: 0.0000e+00
Epoch 5/10
190/190 [==============================] - 0s 1ms/step - loss: 237929.9574 - accuracy: 0.0000e+00 - val_loss: 229743.7344 - val_accuracy: 0.0000e+00
Epoch 6/10
190/190 [==============================] - 0s 1ms/step - loss: 231214.5991 - accuracy: 0.0000e+00 - val_loss: 227071.5000 - val_accuracy: 0.0000e+00
Epoch 7/10
190/190 [==============================] - 0s 768us/step - loss: 226968.6093 - accuracy: 0.0000e+00 - val_loss: 226674.8750 - val_accuracy: 0.0000e+00
Epoch 8/10
190/190 [==============================] - 0s 801us/step - loss: 222549.2651 - accuracy: 0.0000e+00 - val_loss: 226619.8594 - val_accuracy: 0.0000e+00
Epoch 9/10
190/190 [==============================] - 0s 723us/step - loss: 222347.7808 - accuracy: 0.0000e+00 - val_loss: 226613.0625 - val_accuracy: 0.0000e+00
Epoch 10/10
190/190 [==============================] - 0s 804us/step - loss: 224605.4963 - accuracy: 0.0000e+00 - val_loss: 226612.3125 - val_accuracy: 0.0000e+00
Out[8]: <tensorflow.python.keras.callbacks.History at 0x7fef493eeeb0>


As you can see, the accuracy stayed at zero, and when predict was run, the results were far off:


test_array = np.array([4, 27, 100, 121, 9])
print(model.predict(test_array))
[[  0.        23.93601   24.26838   24.139872  24.057058  24.081354
   24.19092    0.         0.        24.016228]
 [  0.       138.48897  138.79132  138.67838  138.5163   138.63455
  138.70287    0.         0.       138.47495 ]
 [  0.       502.07007  502.27725  502.21365  501.8      502.21646
  502.15384    0.         0.       501.75702 ]
 [  0.       606.662    606.8417   606.7923   606.3062   606.80853
  606.7082     0.         0.       606.26276 ]
 [  0.        48.838825  49.16467   49.039547  48.939503  48.984222
   49.08482    0.         0.        48.89856 ]]

What I expected to get:

  • input: 4 -> result: 20
  • input: 27 -> result: 135
  • input: 100 -> result: 500
  • input: 121 -> result: 605
  • input: 9 -> result: 45

That is, the function f(x) = x * 5.
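
For reference, the target mapping itself is trivial to compute directly with NumPy; this is exactly what the network is being asked to approximate (nothing here beyond the values already in the post):

import numpy as np

test_array = np.array([4, 27, 100, 121, 9])
print(test_array * 5)  # [ 20 135 500 605  45]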

Should I look for another activation function instead of relu?
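
For context, this is the kind of change I was considering trying next — replacing the final 10-unit relu layer with a single output unit using Dense's default (linear) activation, which I am guessing is more appropriate for regression. A sketch only, reusing X and Y from the script above; I am not sure whether this is the right direction:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# same setup as above, but with one linear output unit instead of ten relu units
model = Sequential()
model.add(Dense(10, activation='relu', input_shape=(1,)))
model.add(Dense(1))  # default activation is linear — my guess for a regression output
model.compile(loss='mse', optimizer='adam')
model.fit(X, Y, batch_size=50, validation_split=0.05, epochs=10, verbose=1, shuffle=True)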
