My question is related to this question*.
Is it possible to turn standard TensorFlow layers into 'cells', to be used together with RNN cells to compose recurrent neural networks?
So the new 'cell' should store its parameters (weights, etc.) and be callable on varying inputs. Something like this:
import tensorflow as tf

bn_cell = cell_creation_fun(tf.nn.batch_normalization, otherparams)  # batch norm cell
conv_cell = cell_creation_fun(tf.nn.conv2d, otherparams)  # non-RNN conv cell
# or `conv_cell = cell_creation_fun(tf.layers.Conv2D, otherparams)`  # using tf.layers
So that they can be used like this:
multi_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.LSTMCell(...), conv_cell, bn_cell])
Or like this:
h = ...
conv_h, _ = conv_cell(h, state=None)
normed_h, _ = bn_cell(h, state=None)
The only approach I can think of is manually writing such a 'cell' for every layer I want to use, by subclassing RNNCell. But it doesn't seem straightforward to reuse existing classes like Conv2D, since there is no way to pass an `input` parameter during creation. (I will post proper code when I manage; a rough sketch follows below.)
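For concreteness, here is an untested sketch of what I mean by subclassing RNNCell. The class name LayerWrapperCell, the zero-size state, and the assumption that the wrapped layer is a Keras-style callable (e.g. a tf.layers.Conv2D instance) are all my own placeholders, not an existing API:

import tensorflow as tf

class LayerWrapperCell(tf.contrib.rnn.RNNCell):
    """Hypothetical wrapper that makes a stateless layer look like an RNN cell."""

    def __init__(self, layer, output_size):
        super(LayerWrapperCell, self).__init__()
        self._layer = layer              # e.g. tf.layers.Conv2D(filters=16, kernel_size=3)
        self._output_size = output_size  # must match what the wrapped layer produces

    @property
    def state_size(self):
        return 0  # the wrapped layer carries no recurrent state

    @property
    def output_size(self):
        return self._output_size

    def __call__(self, inputs, state, scope=None):
        # Apply the wrapped layer and pass the (unused) state straight through.
        return self._layer(inputs), state

Usage would then be something like `conv_cell = LayerWrapperCell(tf.layers.Conv2D(16, 3), output_size)`, though I am not sure a state_size of 0 is acceptable to MultiRNNCell.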
* Maybe asking in a more targeted way has a better chance of getting an answer.