How to test the way Keras pads the sequences:
A very simple test you can do is to create a model with a single convolutional layer, set its weights to 1 and its biases to 0, and feed it an input of ones to see the output:
from keras.layers import Input, Conv1D
from keras.models import Model
import numpy as np

# creating the model
inp = Input((8, 1))
out = Conv1D(filters=1, kernel_size=4, padding='same')(inp)
model = Model(inp, out)

# setting the kernel weights to 1 and the biases to 0
ws = model.layers[1].get_weights()
ws[0] = np.ones(ws[0].shape)   # weights
ws[1] = np.zeros(ws[1].shape)  # biases
model.layers[1].set_weights(ws)

# predicting the result for a sequence of 8 ones
testData = np.ones((1, 8, 1))
print(model.predict(testData))
The output of this code is:
[[[ 2.]   # a result of 2 shows only 2 of the 4 kernel positions covered real input
  [ 3.]   # a result of 3 shows only 3 of the 4 kernel positions covered real input
  [ 4.]   # a result of 4 shows the full kernel covered real input
  [ 4.]
  [ 4.]
  [ 4.]
  [ 4.]
  [ 3.]]]
So we can conclude that:
- Keras pads the input before performing the convolution, rather than padding the output afterwards. That is why the border results are partial sums instead of zeros.
- Keras splits the padding as evenly as possible between the two ends, and when the total amount is odd, the extra zero goes at the beginning.
So, before applying the convolution, Keras made the input data look like this:

[0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0]
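If you want to double-check this without going through Keras at all, here is a minimal NumPy sketch (the names padded and kernel are just illustrative) that applies that same padding by hand and computes the sliding-window sums; it reproduces the model's prediction exactly:

import numpy as np

# pad the 8 ones exactly as described above: two zeros in front, one behind
padded = np.array([0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0], dtype=float)
kernel = np.ones(4)

# a 'valid' convolution with an all-ones kernel is just a sliding-window sum
result = np.convolve(padded, kernel, mode='valid')
print(result)  # [2. 3. 4. 4. 4. 4. 4. 3.] -- same as the model's output

(np.convolve flips the kernel before sliding it, but with an all-ones kernel that makes no difference here.)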