
I have the following neural network in Python/Keras:

from keras.layers import Input, Dense
from keras.models import Model

input_img = Input(shape=(784,))

encoded = Dense(1000, activation='relu')(input_img)  # L1
encoded = Dense(500, activation='relu')(encoded)     # L2
encoded = Dense(250, activation='relu')(encoded)     # L3
encoded = Dense(2, activation='relu')(encoded)       # L4

decoded = Dense(20, activation='relu')(encoded)      # L5
decoded = Dense(400, activation='relu')(decoded)     # L6
decoded = Dense(100, activation='relu')(decoded)     # L7
decoded = Dense(10, activation='softmax')(decoded)   # L8

mymodel = Model(input_img, decoded)

What I'd like to do is have one neuron in each of layers 4~7 be a constant 1 (to implement the bias term), i.e., it has no input, has a fixed value of 1, and is fully connected to the next layer. Is there a simple way to do this? Thanks a lot!

syeh_106
  • The bias term is already implemented in dense layers; you just need to set `use_bias=True`. By default it is set to `True`, so in your case you are already using a bias term. – gionni Jul 29 '17 at 15:56
  • @gionni Thanks a lot! Motivated by your pointer, I also found more details in https://stackoverflow.com/a/42412124/6373997. – syeh_106 Jul 31 '17 at 04:25

1 Answer


You could create constant input tensors:

import numpy as np
from keras import backend as K
from keras.layers import Input

shape = (1, 1)  # placeholder; use whatever shape you need
constant_values = np.ones(shape)
constant = Input(tensor=K.variable(constant_values))
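
Note that a constant created this way has a fixed shape, so it won't automatically broadcast across a batch of samples. A minimal sketch of an alternative (my own suggestion, not part of the original answer) is to append a column of ones to each sample with a `Lambda` layer, so the following `Dense` layer is fully connected to a unit that is always 1:

import keras.backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model

# appends a constant 1 to every sample's feature vector
append_one = Lambda(lambda x: K.concatenate([x, K.ones_like(x[:, :1])], axis=-1))

input_img = Input(shape=(784,))
hidden = Dense(1000, activation='relu')(input_img)
hidden = append_one(hidden)                     # now 1001 units, the last fixed at 1
output = Dense(500, activation='relu')(hidden)  # fully connected to the constant unit
model = Model(input_img, output)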

That said, for your use case (a bias term) you should simply use `use_bias=True`, which is the default, as noted by @gionni.
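
For reference, a minimal sketch of the built-in bias (this is what your layers already do, since `use_bias` defaults to `True`):

from keras.layers import Dense

# Dense computes activation(dot(x, W) + b), where b is a learned bias vector
layer = Dense(500, activation='relu', use_bias=True)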

Jonas Adler