import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

xss = StandardScaler()
yss = StandardScaler()

# Load the data and split into train/test
dataset = pd.read_csv('primes.csv')
x_train = dataset["x"][0:5400]
y_train = dataset["y"][0:5400]
x_test = dataset["x"][5400:]
y_test = dataset["y"][5400:]

# Wrap the series in a list, then scale
x_train = [x_train]
y_train = [y_train]
x_train = xss.fit_transform(x_train)
y_train = yss.fit_transform(y_train)
x_train = np.asarray(x_train).astype('float32')
y_train = np.asarray(y_train).astype('float32')

model = Sequential()
model.add(Dense(1024, activation="relu"))
model.add(Dropout(0.01))
model.add(Dense(128, activation="relu"))
model.add(Dropout(0.01))
model.add(Dense(24, activation="relu"))
model.add(Dense(1, activation="linear"))

optimizer = tf.keras.optimizers.Adam(learning_rate=1.5e-2, beta_1=0.5)
model.compile(optimizer=optimizer, loss='mse', metrics=['mean_absolute_error'])
model.fit(x_train, y_train, epochs=10, batch_size=128)

I want my output to be between 0 and 100000, but the loss and metric stay at 0 across every iteration.

Epoch 1/10
1/1 [==============================] - 1s 582ms/step - loss: 0.0000e+00 - mean_absolute_error: 0.0000e+00
Epoch 2/10
1/1 [==============================] - 0s 30ms/step - loss: 0.0000e+00 - mean_absolute_error: 0.0000e+00
Epoch 3/10
1/1 [==============================] - 0s 25ms/step - loss: 0.0000e+00 - mean_absolute_error: 0.0000e+00
Epoch 4/10
1/1 [==============================] - 0s 28ms/step - loss: 0.0000e+00 - mean_absolute_error: 0.0000e+00
Epoch 5/10
1/1 [==============================] - 0s 26ms/step - loss: 0.0000e+00 - mean_absolute_error: 0.0000e+00
Epoch 6/10
1/1 [==============================] - 0s 26ms/step - loss: 0.0000e+00 - mean_absolute_error: 0.0000e+00
Epoch 7/10
1/1 [==============================] - 0s 28ms/step - loss: 0.0000e+00 - mean_absolute_error: 0.0000e+00
Epoch 8/10
1/1 [==============================] - 0s 27ms/step - loss: 0.0000e+00 - mean_absolute_error: 0.0000e+00
Epoch 9/10
1/1 [==============================] - 0s 28ms/step - loss: 0.0000e+00 - mean_absolute_error: 0.0000e+00
Epoch 10/10
1/1 [==============================] - 0s 27ms/step - loss: 0.0000e+00 - mean_absolute_error: 0.0000e+00

CSV file that I am using

2 Answers


Try changing the activation function of the last layer to "linear". Since you are performing regression, a "linear" activation is the suitable choice; "softmax" is used for classification, as @Nikhil Kumar has mentioned.
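A quick way to see why "softmax" cannot work here: with a single output unit, softmax always returns exactly 1.0, regardless of the input, so the network could never produce regression targets. This is a minimal NumPy sketch of that fact (the `softmax` helper here is an illustrative stand-in for Keras's built-in activation):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a vector
    e = np.exp(z - z.max())
    return e / e.sum()

# A single-unit softmax output is always 1.0 -- useless for regression.
print(softmax(np.array([3.7])))    # [1.]
print(softmax(np.array([-250.0]))) # [1.]

# A linear activation passes the pre-activation value through unchanged,
# so the output range is unbounded, which is what regression needs.
```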

– Kabilan Mohanraj

You need to normalize your data (scale it to values between 0 and 1). Your model is probably also too big (over 15M parameters) for such data. Why are you fitting all 5400 data points as a single sample? I don't think that is what you meant to do. If you want to find the correlation between x and y, you need to feed the model pairs of shape (x(1), y(1)), not one giant pair (x(5400), y(5400)).

– kacpo1
  • What normalization method (that doesn't destroy the whole idea) would you recommend? And how should I divide my dataset? – landingburn May 05 '21 at 09:51
  • You need to know the maximum value of the data you are trying to predict, then divide the whole dataset by that value so you get values between 0 and 1. Check some tf tutorials so you understand how to create datasets. – kacpo1 May 05 '21 at 10:16
  • Updated the code, what do you think is the problem now? – landingburn May 05 '21 at 12:00
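The max-based rescaling suggested in the comments above could look like the sketch below. Here `y` and the 0.42 prediction are made-up stand-ins; 100000 plays the role of the known maximum of the target column:

```python
import numpy as np

# Hypothetical target values; 100000.0 stands in for the known maximum.
y = np.array([2.0, 3.0, 97.0, 100000.0])
y_max = y.max()

y_scaled = y / y_max          # now in [0, 1], suitable for training
# ... train the model on y_scaled ...

pred_scaled = 0.42            # hypothetical model output in [0, 1]
pred = pred_scaled * y_max    # map back to the original 0-100000 range
```

Unlike `StandardScaler`, this keeps the inverse transform trivial: multiplying a prediction by the saved maximum recovers the original scale.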