A Gated Recurrent Unit (GRU) is a gating mechanism for recurrent neural networks, introduced by Cho et al. in 2014 as a simpler alternative to the LSTM unit with fewer parameters.
Questions tagged [gated-recurrent-unit]
80 questions
20 votes · 1 answer
ValueError: The two structures don't have the same number of elements
with tf.variable_scope('forward'):
    cell_img_fwd = tf.nn.rnn_cell.GRUCell(hidden_state_size, hidden_state_size)
    img_init_state_fwd = rnn_img_mapped[:, 0, :]
    img_init_state_fwd = tf.multiply(
        img_init_state_fwd,
        …

user3640928
13 votes · 1 answer
Tensorflow Serving - Stateful LSTM
Is there a canonical way to maintain a stateful LSTM, etc. with TensorFlow Serving?
Using the TensorFlow API directly this is straightforward, but I'm not certain how best to persist LSTM state between calls after exporting the model…

Asher Newcomer
10 votes · 2 answers
In tensorflow, how to iterate over a sequence of inputs stored in a tensor?
I am trying an RNN on a variable-length multivariate sequence classification problem.
I have defined the following function to get the output of the sequence (i.e. the output of the RNN cell after the final input of the sequence has been fed):
def…

exAres
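Iterating over the time axis explicitly is what TF1's tf.nn.dynamic_rnn does internally; the loop itself is simple to spell out. A minimal NumPy sketch of stepping a GRU cell through a sequence, one timestep at a time (all sizes and weights below are illustrative, not taken from the question):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_unroll(xs, h0, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """Run a single GRU cell over a sequence stored in a (T, D) array."""
    h = h0
    outputs = []
    for x in xs:  # iterate over the time axis of the input tensor
        z = sigmoid(x @ Wz + h @ Uz + bz)               # update gate
        r = sigmoid(x @ Wr + h @ Ur + br)               # reset gate
        h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)   # candidate state
        h = (1 - z) * h + z * h_tilde                   # new hidden state
        outputs.append(h)
    return np.stack(outputs), h

# Hypothetical sizes: 5 timesteps, input dim 3, hidden dim 4
rng = np.random.default_rng(0)
T, D, H = 5, 3, 4
xs = rng.normal(size=(T, D))
params = [rng.normal(scale=0.1, size=s) for s in
          [(D, H), (H, H), (H,), (D, H), (H, H), (H,), (D, H), (H, H), (H,)]]
outputs, h_final = gru_unroll(xs, np.zeros(H), *params)
print(outputs.shape)  # (5, 4)
```

For variable-length batches, tf.nn.dynamic_rnn's sequence_length argument performs this loop for you and stops updating each example's state after its last real input.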
9 votes · 3 answers
Mixing feed forward layers and recurrent layers in Tensorflow?
Has anyone been able to mix feedforward layers and recurrent layers in TensorFlow?
For example:
input->conv->GRU->linear->output
I can imagine defining one's own cell with feedforward layers and no state, which could then be stacked using the…

Fiorentino
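In modern APIs no custom cell is needed: convolutional and recurrent layers compose directly, the only care being the channel/time axis order. A hypothetical PyTorch sketch of the input->conv->GRU->linear->output stack (all layer sizes are made up for illustration):

```python
import torch
import torch.nn as nn

class ConvGRUNet(nn.Module):
    """Hypothetical conv -> GRU -> linear stack; sizes are illustrative."""
    def __init__(self, in_channels=8, conv_channels=16, hidden=32, n_classes=10):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, conv_channels, kernel_size=3, padding=1)
        self.gru = nn.GRU(conv_channels, hidden, batch_first=True)
        self.linear = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, seq_len, in_channels)
        x = x.transpose(1, 2)        # Conv1d expects (batch, channels, seq_len)
        x = torch.relu(self.conv(x))
        x = x.transpose(1, 2)        # back to (batch, seq_len, channels)
        out, h_n = self.gru(x)
        return self.linear(out[:, -1, :])  # classify from the last timestep

net = ConvGRUNet()
y = net(torch.randn(4, 20, 8))
print(y.shape)  # torch.Size([4, 10])
```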
8 votes · 2 answers
calculating the number of parameters of a GRU layer (Keras)
Why is the number of parameters of the GRU layer 9600?
Shouldn't it be ((16+32)*32 + 32) * 3 * 2 = 9,408?
Or, rearranging:
32*(16 + 32 + 1)*3*2 = 9408
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=4500, output_dim=16,…

Abid Orucov
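The gap between 9,408 and 9,600 comes from the bias term. Since TF 2.x, Keras GRU defaults to reset_after=True (the cuDNN-compatible formulation), which keeps two bias vectors per gate instead of one. Assuming the layer in question is a Bidirectional GRU(32) over 16-dimensional embeddings (hence the factor of 2), the arithmetic works out as:

```python
# Parameter count for a Keras GRU layer, assuming input_dim=16, units=32,
# wrapped in Bidirectional (hence the factor of 2). These sizes are taken
# from the question's formula, not verified against the full model.
input_dim, units = 16, 32

# reset_after=False: one bias vector per gate -- the asker's formula
single_bias = 3 * (input_dim * units + units * units + units)
print(single_bias * 2)  # 9408

# reset_after=True (Keras/TF2 default): separate input and recurrent biases,
# i.e. two bias vectors per gate -- what model.summary() reports
double_bias = 3 * (input_dim * units + units * units + 2 * units)
print(double_bias * 2)  # 9600
```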
8 votes · 1 answer
Order of layers in hidden states in PyTorch GRU return
This is the API I am looking at: https://pytorch.org/docs/stable/nn.html#gru
It outputs:
output of shape (seq_len, batch, num_directions * hidden_size)
h_n of shape (num_layers * num_directions, batch, hidden_size)
For a GRU with more than one…

zyxue
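The PyTorch docs note that h_n can be separated into layers and directions with a view: the layer index is the outermost (slowest-varying) axis, the direction index the next one. A small sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

num_layers, num_directions, batch, hidden = 2, 2, 3, 8
gru = nn.GRU(input_size=4, hidden_size=hidden, num_layers=num_layers,
             bidirectional=True)

x = torch.randn(5, batch, 4)        # (seq_len, batch, input_size)
output, h_n = gru(x)
print(output.shape)                 # (5, 3, 16): num_directions * hidden_size
print(h_n.shape)                    # (4, 3, 8):  num_layers * num_directions

# Layer index is the outer axis, direction index the inner one:
h = h_n.view(num_layers, num_directions, batch, hidden)
top_fwd = h[-1, 0]                  # forward state of the top layer
top_bwd = h[-1, 1]                  # backward state of the top layer

# Sanity check: the forward state of the top layer equals the forward half
# of the output at the last timestep.
print(torch.allclose(top_fwd, output[-1, :, :hidden]))  # True
```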
7 votes · 3 answers
Reset parameters of a neural network in pytorch
I have a neural network with the following structure:
class myNetwork(nn.Module):
    def __init__(self):
        super(myNetwork, self).__init__()
        self.bigru = nn.GRU(input_size=2, hidden_size=100, batch_first=True, bidirectional=True)
        …

learner
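There is no single built-in whole-model reset in PyTorch, but most layers (including nn.GRU and nn.Linear) define reset_parameters(); walking the module tree and calling it where present is a common pattern. A sketch:

```python
import torch
import torch.nn as nn

def reset_all_parameters(model: nn.Module) -> None:
    """Re-initialize every submodule that defines reset_parameters()."""
    for module in model.modules():
        if hasattr(module, "reset_parameters"):
            module.reset_parameters()

torch.manual_seed(0)
net = nn.Sequential(
    nn.GRU(input_size=2, hidden_size=100, batch_first=True, bidirectional=True),
)
before = net[0].weight_ih_l0.clone()

torch.manual_seed(1)   # different seed, so the fresh draw differs
reset_all_parameters(net)
print(torch.equal(before, net[0].weight_ih_l0))  # False
```

Note that reset_parameters() re-runs each layer's default initializer; any custom initialization applied on top would need to be re-applied afterwards.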
6 votes · 1 answer
How do I set the initial state of a keras.layers.RNN instance?
I have created a stacked keras decoder model using the following loop:
# Create the encoder
# Define an input sequence.
encoder_inputs = keras.layers.Input(shape=(None, num_input_features))
# Create a list of RNN Cells, these are then concatenated…

Aesir
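For a keras.layers.RNN built from a list of cells, initial_state is passed at call time, one state tensor per cell in the stack. A minimal functional-API sketch (all sizes hypothetical, not from the question's model):

```python
import tensorflow as tf

# Hypothetical sizes
batch, timesteps, features, units = 2, 5, 3, 8

cells = [tf.keras.layers.GRUCell(units) for _ in range(2)]  # stacked cells
rnn = tf.keras.layers.RNN(cells, return_state=True)

inputs = tf.keras.Input(shape=(timesteps, features))
# One initial-state input per cell in the stack
init_states = [tf.keras.Input(shape=(units,)) for _ in cells]
outputs = rnn(inputs, initial_state=init_states)
model = tf.keras.Model([inputs] + init_states, outputs)

x = tf.random.normal((batch, timesteps, features))
states = [tf.zeros((batch, units)) for _ in cells]
out = model([x] + states)   # [last_output, final_state_cell1, final_state_cell2]
print(out[0].shape)  # (2, 8)
```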
5 votes · 1 answer
Implementing Seq2Seq with GRU in Keras
I implemented the ten-minute LSTM example from the Keras site and adjusted the network to handle word embeddings instead of character-level ones (from https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html). It…

Ridvan Aydin Sibic
5 votes · 1 answer
How can I improve the classification accuracy of LSTM, GRU recurrent neural networks?
Binary classification problem in TensorFlow:
I have gone through the online tutorials and am trying to apply them to a real-time problem using a gated recurrent unit (GRU). I have tried every possibility I know of to improve the classification.
1)…

OJJ
5 votes · 1 answer
Explanation of GRU cell in Tensorflow?
The following code from TensorFlow's GRUCell shows the typical operations used to compute an updated hidden state, given the previous hidden state and the current input in the sequence.
def __call__(self, inputs, state, scope=None):
    """Gated…

exAres
4 votes · 1 answer
Finding TensorFlow equivalent of Pytorch GRU feature
I am confused about how to reconstruct the following PyTorch code in TensorFlow. It uses both the input size x and the hidden size h to create a GRU layer:
import torch
torch.nn.GRU(64, 64*2, batch_first=True, return_state=True)
Instinctively, I…

tensornerd
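A rough tf.keras counterpart, assuming the goal is a 128-unit GRU over 64-dimensional inputs: Keras constructors never take the input size, only units; the input dimension is inferred from the data on the first call, and return_sequences/return_state replace unpacking the tuple PyTorch returns.

```python
import tensorflow as tf

# PyTorch's torch.nn.GRU(64, 128, batch_first=True) names the input and
# hidden sizes explicitly; in Keras only units is given, and the input
# size (64 here) is inferred from the first batch.
gru = tf.keras.layers.GRU(128, return_sequences=True, return_state=True)

x = tf.random.normal((4, 10, 64))   # (batch, seq_len, features)
seq, h_n = gru(x)
print(seq.shape)  # (4, 10, 128)
print(h_n.shape)  # (4, 128)
```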
4 votes · 1 answer
How to get final hidden state of bidirectional 2-layers GRU in pytorch
I am struggling to understand how to get the hidden layers and concatenate them.
I am using the following code as an example:
class classifier(nn.Module):
    # define all the layers used in model
    def __init__(self, vocab_size, embedding_dim,…

Abdul Wahab
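For a 2-layer bidirectional GRU, the final hidden states of the top layer are the last two rows of h_n, and concatenating them is the usual way to get one vector per example. A short sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

gru = nn.GRU(input_size=10, hidden_size=16, num_layers=2, bidirectional=True)
x = torch.randn(7, 3, 10)           # (seq_len, batch, input_size)
_, h_n = gru(x)                     # h_n: (num_layers * 2, batch, hidden)

# The last two rows of h_n belong to the top layer: h_n[-2] is its forward
# final state, h_n[-1] its backward final state.
final = torch.cat([h_n[-2], h_n[-1]], dim=1)
print(final.shape)                  # torch.Size([3, 32])
```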
4 votes · 2 answers
Keras - GRU layer with recurrent dropout - loss: 'nan', accuracy: 0
Problem description
I am going through "Deep Learning with Python" by François Chollet (publisher webpage, notebooks on github). Replicating the examples from Chapter 6, I encountered problems with (I believe) the GRU layer with recurrent dropout.
The code in…

user351437
4 votes · 1 answer
Understanding GRU Architecture - Keras
I am using the Mycroft AI wake-word detector and I am trying to understand the dimensions of the network. The following lines show the model in Keras:
model = Sequential()
model.add(GRU(
    params.recurrent_units, activation='linear',
    …

kleka