
Let's say I want to code this basic neural network structure in Keras, which has 10 units in the input layer and 3 units in the output layer.

[Diagram: a network with 10 input units fully connected to 3 output units]

Now if I am using Keras and give an input_shape of more than 10, how will it adjust?

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(10, activation='relu', input_shape=(64,)))  # 10 units, 64-dimensional input
model.add(Dense(3, activation='sigmoid'))                   # 3 output units

model.summary()

You see, here input_shape is of size 64, but how will that fit a model whose first layer has 10 units? From what I have learned, the size of the input shape/vector should be equal to the number of units in the input layer.

Or am I not implementing this neural network right?

  • You have a 2-layer network implemented. The first layer has a weight matrix of shape 64x10, and the second one of 10x3. In the picture you have a 1-layer network with 10 inputs (not 64) and 3 outputs; this corresponds to your second layer. – Slowpoke Jun 10 '20 at 12:06
  • @Slowpoke this is not correct; the code shown corresponds to a **3-layer** network, since there is an implicit input layer here - see [Keras Sequential model input layer](https://stackoverflow.com/questions/46572674/keras-sequential-model-input-layer) – desertnaut Jun 10 '20 at 15:52
  • @desertnaut In my opinion, the question was about not understanding what a fully connected layer itself is, how it is represented by a matrix, and what the numbers of inputs and outputs mean. The input "layer" that converts inputs to tensors is rather Keras technical stuff. We can, of course, draw it on the diagram as two columns of points with 1-to-1 connections, but there's no reason to do so. – Slowpoke Jun 10 '20 at 16:23
  • @Slowpoke my objection is specifically to your statement "*You have a 2-layer network implemented*", which, as I already said, is not correct. The OP has implemented a 3-layer net. I have said nothing regarding what the question is about in my opinion, etc. – desertnaut Jun 10 '20 at 16:27
  • @desertnaut The link you provided was helpful for me to understand. – Ahmad Anis Jun 11 '20 at 02:37

2 Answers


That would not be a problem. A weight matrix of shape (64, 10) is used in the first Dense layer: your input has 64 features and the first hidden layer has 10 units. The second Dense layer then maps those 10 values to 3 outputs. Seems fine to me.

But your input layer itself has size 64. So what you are getting is a 3-layer network with a hidden layer of 10 units.
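You can see this directly from the model in the question (a minimal sketch, assuming TensorFlow 2.x and the model variable defined above):

import numpy as np

print([w.shape for w in model.get_weights()])
# [(64, 10), (10,), (10, 3), (3,)] -- kernel and bias of each Dense layer

out = model.predict(np.random.rand(5, 64))  # 5 samples, 64 features each
print(out.shape)  # (5, 3)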


If the shape of your input vector is 64, then you really do need an input layer of size 64. The input layer of a neural network doesn't perform any computations; it just passes the inputs forward to the first hidden layer. That layer, on the other hand, performs the computations for all of its neurons (a linear combination of the input vector and the weights, which is then fed to the activation function, ReLU in your case).
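In NumPy terms, that hidden layer computes something like this (a rough sketch with random weights, not Keras' actual internals):

import numpy as np

def dense_relu(x, W, b):
    # linear combination of the input and the weights, then ReLU
    return np.maximum(0.0, x @ W + b)

x = np.random.rand(64)      # one 64-dimensional input vector
W = np.random.rand(64, 10)  # weight matrix of the first hidden layer
b = np.zeros(10)            # biases
print(dense_relu(x, W, b).shape)  # (10,) -- one activation per hidden unit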

In your code, you are building a neural net with 64 input neurons (which, again, don't perform any computations), 10 neurons in the first (and only) hidden layer, and 3 neurons in the output layer.
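You can also read this off the parameter counts that model.summary() reports: the hidden layer has 64 × 10 weights + 10 biases = 650 parameters, and the output layer has 10 × 3 + 3 = 33. For example (layer names below are the TF 2.x defaults and may differ):

for layer in model.layers:
    print(layer.name, layer.count_params())
# dense    650
# dense_1   33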

– desertnaut