
I'm running through a TensorFlow tutorial from the packtpub video series. Unfortunately, it appears the base RNN in the tutorial no longer works, or something weird is happening. Any insights?

Here is the error I am receiving:

ValueError: Variable RNN/BasicRNNCell/Linear/Matrix already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

File "<ipython-input-23-dcf4ba3c6842>", line 16, in <module>
    outputs, states = tf.nn.dynamic_rnn(cell, x_, dtype = tf.float32, initial_state = None)
  File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2869, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2809, in run_ast_nodes
    if self.run_code(code, result):

The error appears to indicate that a matrix (presumably the RNN cell's weight matrix) already exists in the graph, but I don't see why it would be created twice.
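
For what it's worth, here is a minimal sketch of what I think is going on, using the same pre-1.0 TensorFlow APIs as the tutorial (the shapes are placeholders, not the tutorial's real values): building the RNN a second time in the same default graph, which is effectively what happens when a notebook cell is re-run, tries to create RNN/BasicRNNCell/Linear/Matrix again.

import tensorflow as tf

# Placeholder shapes for the sketch only; the state size (11) and input count (5)
# match the tutorial's values.
cell = tf.nn.rnn_cell.BasicRNNCell(11)
x_ = tf.placeholder(tf.float32, [1, 100, 5])

# The first construction creates RNN/BasicRNNCell/Linear/Matrix in the default graph.
outputs, states = tf.nn.dynamic_rnn(cell, x_, dtype=tf.float32)

# Constructing it again in the same graph (e.g. re-running the notebook cell
# without restarting the kernel) tries to create the same variable again and
# raises: ValueError: Variable RNN/BasicRNNCell/Linear/Matrix already exists
outputs, states = tf.nn.dynamic_rnn(cell, x_, dtype=tf.float32)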

Here is the code it is referencing

import requests
import numpy as np
import math
import tensorflow as tf
import datetime
from tqdm import tqdm

dataUrl = "https://drcdata.blob.core.windows.net/data/weather.npz"
response = requests.get(dataUrl)
with open("weather.zip", "wb") as code:
    code.write(response.content)
#load into np array
data = np.load("weather.zip")  
daily = data['daily']
weekly = data['weekly']

More Code

num_weeks = len(weekly)
dates = np.array([datetime.datetime.strptime(str(int(d)), '%Y%m%d') for d in weekly[:,0]])
def assign_season(date):
    month = date.month
    #spring = 0
    if 3 <= month < 6:
        season = 0
    #summer = 1
    elif 6 <= month < 9:
        season = 1
    #fall = 2
    elif 9 <= month < 12:
        season = 2
    #winter = 3
    elif month == 12 or month < 3:
        season = 3
    return season

MORE CODE

num_classes = 4
num_inputs = 5
#Historical state for RNN size
state_size = 11

labels = np.zeros([num_weeks, num_classes])
#read and convert to one-hot
for i,d in enumerate(dates):
    labels[i,assign_season(d)] = 1

#extract and scale training data
train = weekly[:,1:]
train = train - np.average(train,axis=0)
train = train / train.std(axis = 0)

sess = tf.InteractiveSession()

#Inputs
x = tf.placeholder(tf.float32, [None, num_inputs])

#Special RNN TF Input Shape
x_ = tf.reshape(x, [1, num_weeks, num_inputs])

#Define the labels
y_ = tf.placeholder(tf.float32, [None, num_classes])

#Define RNN Cell
#RNN's method for looking back in time.
cell = tf.nn.rnn_cell.BasicRNNCell(state_size)
#Intelligently handles recursion instead of unrolling full computation.
outputs, states = tf.nn.dynamic_rnn(cell, x_, dtype = tf.float32, initial_state = None)

#Define Weights and Biases
W1 = tf.Variable(tf.truncated_normal([state_size, num_classes], stddev = 1.0 / math.sqrt(num_inputs)))
b1 = tf.Variable(tf.constant(0.1, shape = [num_classes]))

#reshape output for normal usage
h1 = tf.reshape(outputs, [-1, state_size])

#softmax output, remember, its a classifier
y = tf.nn.softmax(tf.matmul(h1, W1) + b1)

TRAIN IT CODE

sess.run(tf.initialize_all_variables())

#Define Cost Function
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y + 1e-50, y_))

#define train step
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

#Define Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

#Really train this thing.
epochs = 500
train_acc = np.zeros(epochs//10)
test_acc = np.zeros(epochs//10)

for i in tqdm(range(epochs), ascii=True):
    if i % 10 == 0: #record for learning curve display
        A = accuracy.eval(feed_dict={x: train, y_: labels})
        train_acc[i//10] = A
    train_step.run(feed_dict={x: train, y_:labels})

PLOT SOME STUFF

%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(train_acc)
David Crook

1 Answer


Try either clearing the default graph or resetting it entirely (see Remove nodes from graph or reset entire default graph). I hit the same error after building my graph inside

with tf.Session() as sess:

and resetting the default graph solved the problem for me. My guess is that IPython Notebook keeps the default graph alive between runs of a notebook cell, so re-running the cell that builds the RNN tries to create the same variables a second time, whereas a script starts from an empty graph on every run.
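
A minimal sketch of the fix, assuming all of the graph-construction code from the question lives in one notebook cell: call tf.reset_default_graph() at the top of that cell, before any ops or the InteractiveSession are created, so re-running the cell always starts from an empty graph. The num_weeks value below is a placeholder; the input and state sizes are taken from the question.

import tensorflow as tf

# Reset first, before creating any ops or sessions, so re-executing this cell
# never collides with variables left over from a previous run.
tf.reset_default_graph()

num_weeks = 100      # placeholder; in the question this is len(weekly)
num_inputs = 5
state_size = 11

x = tf.placeholder(tf.float32, [None, num_inputs])
x_ = tf.reshape(x, [1, num_weeks, num_inputs])

cell = tf.nn.rnn_cell.BasicRNNCell(state_size)
outputs, states = tf.nn.dynamic_rnn(cell, x_, dtype=tf.float32, initial_state=None)

sess = tf.InteractiveSession()

The order matters: the reset has to happen before the InteractiveSession is opened, because a session created against the old graph will keep pointing at the stale ops even after the default graph is replaced.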

liangjy