Keras uses the TensorFlow implementation of padding. All the details are available in the documentation here.
First, consider the 'SAME' padding scheme. A detailed explanation of
the reasoning behind it is given in these notes. Here, we summarize
the mechanics of this padding scheme. When using 'SAME', the output
height and width are computed as:
out_height = ceil(float(in_height) / float(strides[1]))
out_width = ceil(float(in_width) / float(strides[2]))
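As a quick sanity check, here is a minimal sketch of the 'SAME' output-size formula, assuming TensorFlow 2.x with the Keras API; the concrete sizes (a 7x7 input, 3x3 filter, stride 2) are illustrative only:

import math
import numpy as np
import tensorflow as tf

# 'SAME' output size: ceil(in_height / stride), independent of the filter size.
in_height, filter_height, stride = 7, 3, 2
out_height = math.ceil(in_height / stride)  # ceil(7 / 2) = 4

x = np.zeros((1, in_height, in_height, 1), dtype=np.float32)
y = tf.keras.layers.Conv2D(1, filter_height, strides=stride, padding="same")(x)
print(out_height, y.shape)  # 4 and (1, 4, 4, 1)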
The total padding applied along the height and width is computed as:
if (in_height % strides[1] == 0):
  pad_along_height = max(filter_height - strides[1], 0)
else:
  pad_along_height = max(filter_height - (in_height % strides[1]), 0)
if (in_width % strides[2] == 0):
  pad_along_width = max(filter_width - strides[2], 0)
else:
  pad_along_width = max(filter_width - (in_width % strides[2]), 0)
Finally, the padding on the top, bottom, left, and right is:
pad_top = pad_along_height // 2
pad_bottom = pad_along_height - pad_top
pad_left = pad_along_width // 2
pad_right = pad_along_width - pad_left
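The following small sketch traces the padding arithmetic above end to end; the input size, filter size, and stride (7, 4, and 2) are arbitrary illustrative values:

in_height, filter_height, stride = 7, 4, 2

# Total padding along the height, as in the formula above.
if in_height % stride == 0:
    pad_along_height = max(filter_height - stride, 0)
else:
    pad_along_height = max(filter_height - (in_height % stride), 0)

# Split between the two sides: the extra pixel (if any) goes to the bottom.
pad_top = pad_along_height // 2
pad_bottom = pad_along_height - pad_top
print(pad_along_height, pad_top, pad_bottom)  # 3, 1, 2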
Note that because of the division by 2, the padding on the two sides
(top vs. bottom, left vs. right) can differ by one. In that case, the
bottom and right sides always get the one additional padded pixel. For
example, when pad_along_height is 5, we pad 2 pixels at the top and 3
pixels at the bottom. Note that this differs from libraries such as
cuDNN and Caffe, which explicitly specify the number of padded pixels
and always pad the same number of pixels on both sides.
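This asymmetry can be reproduced by hand: a 'valid' convolution applied after an explicit, asymmetric ZeroPadding2D should match a 'same' convolution. The sketch below assumes TensorFlow 2.x, a 5x5 input, a 4x4 all-ones kernel, and stride 1, so the total padding of 3 splits into 1 (top/left) and 2 (bottom/right):

import numpy as np
import tensorflow as tf

x = np.random.rand(1, 5, 5, 1).astype(np.float32)
ones = tf.keras.initializers.Ones()  # identical weights in both conv layers

same_conv = tf.keras.layers.Conv2D(1, 4, padding="same",
                                   kernel_initializer=ones, use_bias=False)
manual_pad = tf.keras.layers.ZeroPadding2D(((1, 2), (1, 2)))  # (top, bottom), (left, right)
valid_conv = tf.keras.layers.Conv2D(1, 4, padding="valid",
                                    kernel_initializer=ones, use_bias=False)

print(np.allclose(same_conv(x), valid_conv(manual_pad(x))))  # True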
For the 'VALID' scheme, the output height and width are computed as:
out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
out_width = ceil(float(in_width - filter_width + 1) / float(strides[2]))
and no padding is used.
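For completeness, here is a minimal check of the 'VALID' formula under the same assumptions (TensorFlow 2.x; the 10x10 input, 3x3 filter, and stride 2 are illustrative):

import math
import numpy as np
import tensorflow as tf

# 'VALID': no padding, so the filter window must fit entirely inside the input.
in_height, filter_height, stride = 10, 3, 2
out_height = math.ceil((in_height - filter_height + 1) / stride)  # ceil(8 / 2) = 4

x = np.zeros((1, in_height, in_height, 1), dtype=np.float32)
y = tf.keras.layers.Conv2D(1, filter_height, strides=stride, padding="valid")(x)
print(out_height, y.shape)  # 4 and (1, 4, 4, 1)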