
I have a numpy array of arrays with shape (273, 168), so 273 samples, each with 168 observations.

As output, I want 273 arrays of 24 observations each.

Why does my code give me a dimension mismatch error?

import numpy as np
import tensorflow as tf

# dummy data; note randint's upper bound is exclusive, so these arrays are all zeros
x = np.random.randint(0, 1, (273, 168))
y = np.random.randint(0, 1, (273, 24))


model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv1D(7, activation='relu', kernel_size=3, input_shape=(168, 273)))
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.Dense(24))


model.compile(loss=tf.losses.MeanSquaredError(),
              optimizer=tf.optimizers.Adam(),
              metrics=[tf.metrics.MeanAbsoluteError()])

model.fit(x, y)

model.predict(x[42])

Can anyone help me?

– mat

  • Check https://stackoverflow.com/questions/69591717/how-is-the-keras-conv1d-input-specified-i-seem-to-be-lacking-a-dimension/69594400#69594400 – AloneTogether May 27 '22 at 05:46

1 Answer

If you check the error message, you can see that the required number of dimensions is 3, but you are passing only 2 (n_samples, n_features). You need to reshape your data using .reshape(n_samples, 1, n_features). Your input_shape is also incorrect.
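As an aside (my own illustration, using the shapes from the question), you can check the number of dimensions directly:

import numpy as np

x = np.random.randint(0, 1, (273, 168))
print(x.ndim, x.shape)    # 2 (273, 168): (n_samples, n_features)

x3 = x.reshape(273, 1, 168)
print(x3.ndim, x3.shape)  # 3 (273, 1, 168): (n_samples, timesteps, n_features)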

Finally, you need to set padding='same'; otherwise the convolution shrinks the time dimension, and with only one timestep a size-3 kernel does not fit. (You can try running without padding='same' and see what error you get.)
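To see this concretely, here is a small sketch (my own illustration, not from the original answer) building the layer with the shapes used below:

import tensorflow as tf

inp = tf.zeros((1, 1, 168))  # (batch, timesteps=1, channels=168)
out = tf.keras.layers.Conv1D(7, kernel_size=3, padding='same')(inp)
print(out.shape)  # (1, 1, 7): 'same' zero-pads, so the length-1 time axis survives

# With the default padding='valid', the output length would be 1 - 3 + 1 = -1,
# so the same call raises an error about a negative output dimension.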

This code works:

import numpy as np
import tensorflow as tf

x = np.random.randint(0, 1, (273, 168)).reshape(273, 1, 168)
y = np.random.randint(0, 1, (273, 24)).reshape(273, 1, 24)

model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv1D(7, activation='relu', kernel_size=3, padding='same', input_shape=(1, 168)))
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.Dense(24))


model.compile(loss=tf.losses.MeanSquaredError(),
              optimizer=tf.optimizers.Adam(),
              metrics=[tf.metrics.MeanAbsoluteError()])

model.fit(x, y)
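As an aside (not part of the original answer): model.predict expects a batch dimension, so to predict on a single sample, index with a slice to keep all three dims (a minimal sketch using the reshaped x above):

pred = model.predict(x[42:43])  # x[42:43] has shape (1, 1, 168)
print(pred.shape)               # (1, 1, 24): one sample, one timestep, 24 outputs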
– Ach113
  • Thanks, can you give me more info about padding? – mat May 31 '22 at 10:14
  • When applying a convolution operation to an input (be it a vector, matrix, or tensor), the output dimensions shrink; by how much depends on the filter size and stride parameters. Conv layers have a `padding` parameter, which can be set to `same`: this zero-pads the input so the output keeps the original shape. By default padding is set to `valid`, which shrinks the input. – Ach113 May 31 '22 at 12:54
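A small illustration of that comment (my own sketch with toy shapes), comparing the two padding modes over a 10-step sequence:

import tensorflow as tf

x = tf.zeros((1, 10, 4))  # batch of 1, 10 timesteps, 4 channels
print(tf.keras.layers.Conv1D(2, kernel_size=3, padding='valid')(x).shape)  # (1, 8, 2): 10 - 3 + 1 = 8
print(tf.keras.layers.Conv1D(2, kernel_size=3, padding='same')(x).shape)   # (1, 10, 2): length preserved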