I would like to use tf.nn.conv2d() on a single image, but the TensorFlow documentation only seems to cover applying it to a batch of images. The docs state that the input must have shape [batch, in_height, in_width, in_channels] and that the kernel must have shape [filter_height, filter_width, in_channels, out_channels]. What is the most straightforward way to apply a 2D convolution to a single input of shape [in_height, in_width, in_channels]?
Here is an example of my current approach, where img has shape (height, width, channels):
img = tf.random.uniform((10, 10, 3))      # a single image
kernel = tf.random.uniform((3, 3, 3, 8))  # [filter_height, filter_width, in_channels, out_channels]
img = tf.nn.conv2d([img], kernel, strides=1, padding="SAME")[0]  # create a batch of 1, then index out the single result
In effect I am reshaping the input as follows:
[in_height, in_width, in_channels] -> [1, in_height, in_width, in_channels] -> [in_height, in_width, in_channels]
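For reference, the batch-of-1 round trip above can be written explicitly with tf.expand_dims and tf.squeeze; the shapes here (a 10x10 RGB image and a 3x3 kernel with 8 output channels) are just illustrative:

```python
import tensorflow as tf

# A single image and a kernel (shapes chosen for illustration).
img = tf.random.uniform((10, 10, 3))      # [in_height, in_width, in_channels]
kernel = tf.random.uniform((3, 3, 3, 8))  # [filter_height, filter_width, in_channels, out_channels]

# Expand to a batch of one, convolve, then squeeze the batch dimension away.
batched = tf.expand_dims(img, axis=0)     # shape [1, 10, 10, 3]
out = tf.nn.conv2d(batched, kernel, strides=1, padding="SAME")
result = tf.squeeze(out, axis=0)          # shape [10, 10, 8]
```

This makes the expand/convolve/squeeze steps explicit, but it is exactly the reshape round trip I would like to avoid.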
This feels like an unnecessary and costly operation when I am only interested in transforming one example.
Is there a simple/standard way to do this that doesn't involve reshaping?