
I have 1000 datasets, each consisting of 8000 signal amplitudes and a label: the fundamental frequency of that signal. What is the best approach to building a neural network that predicts the fundamental frequency of a newly provided signal?

For example:
Fundamental freq: 75.88206932 Hz
Snippet of data:

 -9.609272558949627507e-02
 -4.778297441391140543e-01
 -2.434520972570237696e-01
 -1.567176020112603263e+00
 -1.020037056101358752e+00
 -1.129608807811322446e+00
  4.303651786855859918e-01
 -3.936956061582048694e-01
 -1.224883726737033163e+00
 -1.776803300708089672e+00

The model I've created (training set shape: (600, 8000, 1)):

  from keras.models import Sequential
  from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout

  model = Sequential()
  model.add(Conv1D(filters=64, kernel_size=3, activation='tanh',
                   input_shape=(data.shape[1], data.shape[2])))
  model.add(MaxPooling1D(pool_size=2))
  model.add(Conv1D(filters=64, kernel_size=3, activation='tanh'))
  model.add(MaxPooling1D(pool_size=2))
  model.add(Conv1D(filters=64, kernel_size=3, activation='tanh'))
  model.add(MaxPooling1D(pool_size=2))
  model.add(Flatten())
  model.add(Dense(500, activation='tanh'))
  model.add(Dropout(0.2))
  model.add(Dense(50, activation='tanh'))
  model.add(Dropout(0.2))
  model.add(Dense(1, activation='linear'))

  model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])

But the model doesn't train: accuracy stays at ~0.0.
I'd appreciate any advice.

Alexandra

2 Answers


You might first FFT the data, either with or without a window, and then use the FFT magnitude vectors as ML training data vectors.
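A minimal sketch of that preprocessing step, assuming the signals are stored as a `(n_samples, 8000)` NumPy array (the names here are illustrative, not from the question):

```python
import numpy as np

def fft_magnitude_features(signals, window=True):
    """Return one-sided FFT magnitude spectra for each signal."""
    signals = np.asarray(signals, dtype=float)
    if window:
        # A Hann window reduces spectral leakage at the snippet edges
        signals = signals * np.hanning(signals.shape[-1])
    # rfft keeps only the non-negative frequency bins
    # (4001 bins for 8000 real-valued samples)
    return np.abs(np.fft.rfft(signals, axis=-1))

# Synthetic stand-in for the real data, just to show the shapes
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8000))
features = fft_magnitude_features(X)
print(features.shape)  # (4, 4001)
```

The magnitude vectors (optionally log-scaled) would then replace the raw amplitudes as network input.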

hotpaw2

What is the best approach to build a neural network to predict fundamental frequency for newly provided signal?

That is way too broad a question for SO, and consequently you should not expect a sufficiently detailed, meaningful answer.

That said, there are certain issues with your code, and rectifying them will arguably move you a step closer to achieving your end goal.

So, you are making a very fundamental mistake:

Accuracy is suitable only for classification problems; for regression (i.e. numeric prediction) ones, such as yours, accuracy is meaningless.

What's more, Keras unfortunately will not "protect" you or any other user from putting such meaningless requests in your code: you will not get any error, or even a warning, that you are attempting something that does not make sense, such as requesting accuracy in a regression setting. See my answer in What function defines accuracy in Keras when the loss is mean squared error (MSE)? for more details and a practical demonstration.

So, here your performance metric is actually the same as your loss, i.e. the mean squared error (MSE); you should aim to make this quantity as small as possible on your validation set, and remove the metrics=['accuracy'] argument from your model compilation entirely.
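A quick numeric illustration (with made-up data) of why exact-match "accuracy" collapses to zero for continuous targets even when the regressor is good:

```python
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.uniform(50, 400, size=1000)           # hypothetical frequencies in Hz
y_pred = y_true + rng.normal(0, 1.0, size=1000)    # a fairly accurate regressor

mse = float(np.mean((y_true - y_pred) ** 2))       # small and meaningful (~1 Hz^2)
# "Accuracy" counts exact equality, which essentially never
# happens between continuous predictions and continuous labels:
acc = float(np.mean(y_true == y_pred))
print(mse, acc)  # acc is 0.0 despite predictions within ~1 Hz
```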

Additionally, nowadays we practically never use tanh activation for the hidden layers; you should try relu instead.
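Putting the two fixes together, a revised version of the model might look like the sketch below (layer sizes kept from the question; the MAE metric is just one reasonable choice for monitoring):

```python
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, Dropout

model = Sequential([
    Input(shape=(8000, 1)),
    Conv1D(filters=64, kernel_size=3, activation='relu'),
    MaxPooling1D(pool_size=2),
    Conv1D(filters=64, kernel_size=3, activation='relu'),
    MaxPooling1D(pool_size=2),
    Conv1D(filters=64, kernel_size=3, activation='relu'),
    MaxPooling1D(pool_size=2),
    Flatten(),
    Dense(500, activation='relu'),
    Dropout(0.2),
    Dense(50, activation='relu'),
    Dropout(0.2),
    Dense(1, activation='linear'),      # single continuous output
])
# MSE as the loss; MAE as an interpretable monitoring metric (in Hz)
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae'])
print(model.output_shape)  # (None, 1)
```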

desertnaut
  • Your answer made me aware that I have to go back to foundations, thank you. But how will I know if my model is good enough? For example, in a classification problem accuracy is in the range [0, 1], where 1.0 is perfect accuracy; in a regression problem I can use RMSE as a metric, where the perfect value is of course 0.0, but it has no upper limit. So how do I cope with that? – Alexandra Mar 15 '19 at 09:51
  • @Alexandra you are very welcome. This is indeed a concern with regression models, and there is no ready-made approach. Plotting the predictions vs the actual values, as well as calculating any *practical* costs incurred by the discrepancies, are the very first approaches... – desertnaut Mar 15 '19 at 10:30
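The predicted-vs-actual plot suggested in the last comment can be sketched like this (synthetic values stand in for real model output; points hugging the diagonal indicate a good regressor):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, render to file
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(2)
y_true = rng.uniform(50, 400, 200)                 # hypothetical actual frequencies
y_pred = y_true + rng.normal(0, 10, 200)           # hypothetical model predictions

fig, ax = plt.subplots()
ax.scatter(y_true, y_pred, s=10)
lims = [y_true.min(), y_true.max()]
ax.plot(lims, lims)                                # perfect-prediction diagonal
ax.set_xlabel('Actual frequency (Hz)')
ax.set_ylabel('Predicted frequency (Hz)')
fig.savefig('pred_vs_actual.png')
```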