
For tf.layers.conv2d, I noticed that padding="VALID" sometimes produces negative padding lengths.

This page in the docs says that with "VALID" padding, the output height and padding length are calculated as follows:

out_height = math.ceil(float(in_height - filter_height + 1) / float(strides[1]))
pad_along_height = ((out_height - 1) * strides[1] +
                    filter_height - in_height)

If you use these values, for example:

in_height = 150
filter_height = 7
strides = (1, 4, 4, 1)

Then you get pad_along_height == -3. Why would tensorflow sometimes choose negative padding by default? This seems weird to me: you lose information from the previous layer. Shouldn't "VALID" padding be the minimum amount of padding needed to retain all the information from the previous layer? Rather than losing 3 rows (and getting an output height of 36), I think I would have preferred padding with 1 row and getting an output height of 37.
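For reference, the numbers above can be checked with plain arithmetic (no TensorFlow needed); this is just the docs' "VALID" formulas wrapped in a small function:

```python
import math

def valid_out_and_pad(in_height, filter_height, stride):
    """Apply the docs' formulas for padding='VALID' along one dimension."""
    out_height = math.ceil(float(in_height - filter_height + 1) / float(stride))
    pad_along_height = (out_height - 1) * stride + filter_height - in_height
    return out_height, pad_along_height

out_h, pad_h = valid_out_and_pad(150, 7, 4)
print(out_h, pad_h)  # 36 -3
```

The negative value means the last 3 input rows are never covered by any filter window, which is the "truncating" behavior being asked about.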

Why did Google implement it this way? It's not really "padding", is it? More like "truncating". It just doesn't make sense to me.

EDIT: Note - In this case "SAME" will produce pad_along_height == 5 for an output height of 38.
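The "SAME" numbers in the edit follow from the analogous formulas on the same docs page; a quick sketch to verify them:

```python
import math

def same_out_and_pad(in_height, filter_height, stride):
    """Docs' formulas for padding='SAME': every input row is covered."""
    out_height = math.ceil(float(in_height) / float(stride))
    pad_along_height = max((out_height - 1) * stride + filter_height - in_height, 0)
    return out_height, pad_along_height

print(same_out_and_pad(150, 7, 4))  # (38, 5)
```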

Andy Carlson
  • "For the 'VALID' padding, the padding values are always zero." — quote from this page in the docs. I'm afraid that you misunderstood something... – LI Xuhong Jan 03 '18 at 20:18
  • When I read "padding values are always zero," I interpret it as "the value we insert for the padding is zeros." Shouldn't "padding values" refer to the value being used to pad the array, rather than the length of the padding? (I guess not...) – Andy Carlson Jan 03 '18 at 21:00
  • Nonetheless, I do understand how it works. What I'm *really* asking is **why**. Why did they choose to implement it this way? Why would someone desire this behavior over the alternative I've suggested above? – Andy Carlson Jan 03 '18 at 21:04
  • This post seems to provide a good explanation: https://stackoverflow.com/questions/37674306/what-is-the-difference-between-same-and-valid-padding-in-tf-nn-max-pool-of-t but I'm not sure it will satisfy you. You either need to pad or crop, and the API offers a mode for each (VALID and SAME). – de1 Jan 03 '18 at 21:32
  • @Andy Carlson Sorry, it may be me who misunderstood, but `VALID` does not use padding; see [the current docs](https://www.tensorflow.org/api_guides/python/nn#Convolution). By the way, the link in @de1's comment explains these two padding schemes very clearly; you have probably already understood. Note that you can always do the padding yourself, using `tf.pad()`, if you don't like the two schemes in tensorflow. – LI Xuhong Jan 03 '18 at 22:40
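As the last comment suggests, the alternative the question proposes (pad 1 row manually, then use "VALID") is easy to do yourself, e.g. with `tf.pad()` before the conv. The shape arithmetic behind that suggestion, sketched in plain Python:

```python
import math

def valid_out_height(in_height, filter_height, stride):
    # 'VALID' output height from the docs' formula
    return math.ceil(float(in_height - filter_height + 1) / float(stride))

# Pad 1 extra row yourself (e.g. via tf.pad) before the 'VALID' conv:
# the effective input height becomes 151, giving the preferred output.
print(valid_out_height(150 + 1, 7, 4))  # 37
```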

0 Answers