I am trying to follow this SO post on how the params are calculated for each layer. Can anyone give me a tip?

Here is the output of my model.summary():

Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 60)                2700
dense_1 (Dense)              (None, 55)                3355
dense_2 (Dense)              (None, 50)                2800
dense_3 (Dense)              (None, 45)                2295
dense_4 (Dense)              (None, 30)                1380
dense_5 (Dense)              (None, 20)                620
dense_6 (Dense)              (None, 1)                 21
=================================================================
Total params: 13,171
Trainable params: 13,171
Non-trainable params: 0

This is the model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(60, input_dim=44, kernel_initializer='normal', activation='relu'))
model.add(Dense(55, kernel_initializer='normal', activation='relu'))
model.add(Dense(50, kernel_initializer='normal', activation='relu'))
model.add(Dense(45, kernel_initializer='normal', activation='relu'))
model.add(Dense(30, kernel_initializer='normal', activation='relu'))
model.add(Dense(20, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))

1 Answer

For MLPs (stacks of fully connected layers), the parameter count of each layer is:

(previous_layer_nodes + 1) * layer_nodes

where the +1 accounts for the bias node of the previous layer.

For the first Dense layer, the number of nodes of the previous layer is input_dim, since the input itself acts as an implicit layer.

So, in your case:

dense   : (44+1)*60 = 2700
dense_1 : (60+1)*55 = 3355
dense_2 : (55+1)*50 = 2800
dense_3 : (50+1)*45 = 2295
dense_4 : (45+1)*30 = 1380
dense_5 : (30+1)*20 = 620
dense_6 : (20+1)*1  = 21
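
As a quick sanity check, the whole Param # column can be reproduced in a few lines of plain Python (a minimal sketch; the layer sizes are copied from the model above):

# Reproduce the Param # column of model.summary() via
# (previous_layer_nodes + 1) * layer_nodes
input_dim = 44
layer_sizes = [60, 55, 50, 45, 30, 20, 1]

previous = input_dim
total = 0
for nodes in layer_sizes:
    params = (previous + 1) * nodes  # +1 for the bias node
    total += params
    print(f"({previous}+1)*{nodes} = {params}")
    previous = nodes

print("Total params:", total)  # 13171, matching model.summary()
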
desertnaut
  • 57,590
  • 26
  • 140
  • 166
  • Can you tell me a little more about what the params mean? My model architecture is 6 layers, starting with a width of 60 neurons. Thanks – bbartling Apr 03 '20 at 14:34
  • @HenryHub As hinted (& linked), the model actually starts with an (implicit) input layer; this has `input_dim=44` nodes + 1 bias, and it is connected to the 60-node layer. In turn, the 60-node layer plus a bias of its own is connected to the next, 55-node layer, and so on. Between each pair of nodes in consecutive layers (including the bias nodes) there is a weight parameter – desertnaut Apr 03 '20 at 14:39
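
To see the weight/bias split described in that comment, you can inspect the layer weights directly with Keras' get_weights(); a minimal sketch, assuming the model defined in the question has been built:

# Each Dense layer holds a kernel matrix and a bias vector:
# kernel shape = (previous_layer_nodes, layer_nodes), bias shape = (layer_nodes,)
kernel, bias = model.layers[0].get_weights()
print(kernel.shape)             # (44, 60) -> 44*60 = 2640 weights
print(bias.shape)               # (60,)    ->         60 biases
print(kernel.size + bias.size)  # 2700, the Param # of the first layer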