
Inception layer

I am attempting to replicate this image from a research paper. In the image, the orange arrow indicates a shortcut using residual learning, and the layer outlined in red indicates a dilated convolution.

In the code below, r5 is the output of the ReLU shown in the image. I have excluded the code between the ReLU and the dilation layer for simplicity. In TensorFlow, how would I properly combine the output of the ReLU with the dilated convolution to implement the residual shortcut?

# ReLU layer
r5 = tf.nn.relu(layer5)
...
# dilation layer
h_conv4 = conv3d_dilation(concat1, 1154)
– Devin Haslam

1 Answer


The image is quite straightforward: it says you should add the two tensors element-wise, so:

import tensorflow as tf

# ReLU layer
r5 = tf.nn.relu(layer5)
...
# dilation layer
h_conv4 = conv3d_dilation(concat1, 1154)

# combined: the residual shortcut is element-wise addition,
# so the two tensors must have the same shape
combined = r5 + h_conv4
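
To see it run end-to-end, here is a self-contained sketch with made-up placeholder shapes (the shapes below are assumptions for illustration, not taken from the paper):

import tensorflow as tf

# stand-ins for r5 and h_conv4 with identical, made-up 5-D shapes:
# (batch, depth, height, width, channels)
r5 = tf.placeholder(tf.float32, [None, 8, 32, 32, 64])
h_conv4 = tf.placeholder(tf.float32, [None, 8, 32, 32, 64])

# the residual shortcut is just element-wise addition
combined = tf.add(r5, h_conv4)  # equivalent to r5 + h_conv4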
– lejlot
  • If you have a moment, I have a second question. This is the architecture I am attempting to replicate: http://i.stack.imgur.com/1qLP2.png From this information, is there any way to tell the number of output channels for the deconvolutions? – Devin Haslam Oct 30 '17 at 21:29
  • It depends on the shape of `r5`; it is quite possible that it needs a projection (see the sketch after these comments): https://stackoverflow.com/q/46121283/712995 – Maxim Oct 31 '17 at 07:25
  • @lejlot This is a 3D image, so P is actually referring to the 3rd dimension, not the number of channels – Devin Haslam Oct 31 '17 at 20:03
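
If the two tensors do not have the same shape, a common fix (along the lines of the question linked in the comment above) is to project one of them with a 1×1×1 convolution before adding. A minimal sketch, assuming the mismatch is only in the channel dimension; all shapes and sizes here are illustrative, not from the original paper:

import tensorflow as tf

# made-up tensors whose channel counts differ: 64 vs. 128
r5 = tf.placeholder(tf.float32, [None, 8, 32, 32, 64])
h_conv4 = tf.placeholder(tf.float32, [None, 8, 32, 32, 128])

# a 1x1x1 convolution projects r5 to 128 channels so the shapes line up
r5_proj = tf.layers.conv3d(r5, filters=128, kernel_size=1, padding='same')

# residual shortcut after the projection
combined = r5_proj + h_conv4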