
According to TensorFlow's official website, TensorFlow operations use GPU computation by default.

"If a TensorFlow operation has both CPU and GPU implementations, the GPU devices will be given priority when the operation is assigned to a device."

I'm training a dynamic RNN with 3 layers of LSTM cells. But when I monitor GPU usage, the GPU load stays at 0%.

[screenshot: GPU monitoring tool showing 0% GPU load]

My GPU is an NVIDIA GTX 960M. Details:

[screenshot: GPU details for the NVIDIA GTX 960M]

I've googled a lot but still found nothing. I'm pretty sure I installed the GPU-enabled version of TensorFlow, and it's up to date. Is it possible that there's no GPU implementation for dynamic_rnn or LSTMCell? If so, is there any way to run them on the GPU?
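
For what it's worth, here is a minimal sketch (a toy graph, not my actual model) of the kind of test I have in mind: with log_device_placement enabled, a working GPU setup should report the LSTM kernels on /gpu:0.

import tensorflow as tf

# Toy graph: a single small LSTM run through dynamic_rnn.
cell = tf.contrib.rnn.BasicLSTMCell(8)
inputs = tf.random_normal([1, 5, 8])    # batch 1, 5 time steps, 8 features
outputs, _ = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

# log_device_placement prints the device assigned to every op.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(outputs)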

Thanks.

My code:

import numpy as np
import tensorflow as tf

lr = 0.001                # learning rate
seq_length = 50           # characters per training sequence
char_per_iter = 5         # stride between consecutive sequences
n_inputs = 97             # one-hot size: 96 printable ASCII chars + 1 "other"
n_hidden_units = 700
n_layers = 3
keep_prob = tf.placeholder(tf.float32)
seq_length_holder = tf.placeholder(tf.int32)

text = open('C:/Users/david_000/workspace/RNN/text_generator/PandP.txt', 'r', encoding='utf8').read()
text = text[667:]                 # skip the first 667 characters (file preamble)
text = text.replace("\n", " ")
text = text.replace("‘", "'")     # normalize curly quotes to ASCII
text = text.replace("’", "'")
text = text.replace("“", '"')
text = text.replace("”", '"')

# check that all chars are ASCII
uni = "".join(set(text))
for ch in uni:
    if ord(ch) >= 128:
        print(ch)

# slice the text into overlapping sequences, one every char_per_iter characters
x = []
for i in range(len(text) // char_per_iter):
    x.append(text[i * char_per_iter : i * char_per_iter + seq_length])

def oneHot(char):
    # map printable ASCII (ord 32-127) to indices 1-96; anything else to index 0
    asc = ord(char) - 31
    if asc < 97:
        return np.eye(97)[asc].reshape(1, 1, -1)
    else:
        return np.eye(97)[0].reshape(1, 1, -1)

def getOneHot(seq):
    # one-hot encode a whole sequence: shape (1, len(seq), n_inputs)
    out = []
    for char in seq:
        out.append(oneHot(char))

    return np.array(out).reshape(1, len(seq), n_inputs)

'''
 RNN
'''

# tf Graph input: batch of 1, variable-length sequence of one-hot vectors
xs = tf.placeholder(tf.float32, [1, None, n_inputs])
ys = tf.placeholder(tf.float32, [1, None, n_inputs])

def lstm_cell():
    cell = tf.contrib.rnn.BasicLSTMCell(n_hidden_units)
    cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
    return cell

def RNN(X):
    # n_layers-deep LSTM stack -> dense softmax over the character vocabulary
    lstm_stacked = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(n_layers)])
    outputs, final_state = tf.nn.dynamic_rnn(lstm_stacked, X, dtype=tf.float32)
    output = tf.layers.dense(outputs, n_inputs, activation=tf.nn.softmax)
    output = tf.reshape(output, [-1, seq_length_holder, n_inputs])

    return output


pred = RNN(xs)
cost = tf.losses.sigmoid_cross_entropy(ys, pred)
optimizer = tf.train.AdamOptimizer(lr)
train_step = optimizer.minimize(cost)
# allow_growth stops TensorFlow from grabbing all GPU memory up front
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

init = tf.global_variables_initializer()
sess.run(init)

for i in range(47, len(x) - 1):   # len(x) - 1 so that x[i + 1] stays in range
    sess.run(train_step, feed_dict={xs: getOneHot(x[i]), ys: getOneHot(x[i + 1]),
                                    keep_prob: 0.7, seq_length_holder: seq_length})
    if i % 10 == 0:
        print(i)

Update

Now I know the problem is that TensorFlow can't find my GPU. When running

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

It gives:

Device mapping: no known devices.
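
Another sketch that should confirm the same thing, using TensorFlow's device_lib utility (on a working install this is supposed to list a GPU entry next to the CPU):

from tensorflow.python.client import device_lib

# Print every device TensorFlow can see; a working GPU setup should
# include an entry with device_type "GPU" in addition to the CPU.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)

In my case, only the CPU shows up.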

I followed this tutorial https://nitishmutha.github.io/tensorflow/2017/01/22/TensorFlow-with-gpu-for-windows.html step by step, except for the Anaconda part, since I installed TensorFlow with pip3. It still doesn't work for me.

Update

I know why it doesn't work.

For some reason, my Python was using the CPU-only version of TensorFlow even though I had both installed. I uninstalled the CPU version and reinstalled tensorflow-gpu. Now it gives this error:

No module named '_pywrap_tensorflow_internal'
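
From what I've read, this error usually means a DLL that the GPU build depends on failed to load. A quick sketch to check (the DLL names below assume TF 1.2 with CUDA 8.0 and cuDNN 5.1; other versions expect different names):

import ctypes

# Try to load the libraries the TF 1.2 GPU build needs at import time;
# an OSError here means that DLL is not on PATH.
ctypes.WinDLL("cudart64_80.dll")   # CUDA 8.0 runtime
ctypes.WinDLL("cudnn64_5.dll")     # cuDNN 5.1
print("CUDA and cuDNN DLLs found on PATH")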

I know it's related to the installation, and there are more discussions about it, such as:

On Windows, running "import tensorflow" generates No module named "_pywrap_tensorflow" error

Cannot import Tensorflow for GPU on Windows 10
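
Once the import works again, this sketch should confirm which installation Python actually loads and whether it was built with CUDA support:

import tensorflow as tf

print(tf.__version__)                 # installed version
print(tf.__file__)                    # which site-packages copy got imported
print(tf.test.is_built_with_cuda())   # True for the GPU build
print(tf.test.gpu_device_name())      # e.g. "/gpu:0", or "" if no GPU is found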

I'll keep working on it.

  • Do you have the CUDA toolkit installed? https://www.tensorflow.org/install/install_windows#requirements_to_run_tensorflow_with_gpu_support – Peter Gibson Jul 24 '17 at 03:38
  • Yes, I followed this tutorial https://nitishmutha.github.io/tensorflow/2017/01/22/TensorFlow-with-gpu-for-windows.html But when I run: sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)) it says "Device mapping: no known devices." – David Jul 24 '17 at 04:58
  • Do you have the `tensorflow-gpu` python package installed? – Imran Jul 24 '17 at 05:33
  • Yes, Imran. I double checked. I installed with "pip3 install --upgrade tensorflow-gpu" – David Jul 24 '17 at 05:46
  • I noticed something. I uninstalled TensorFlow and installed tensorflow-gpu. Then Python can't find tensorflow when importing: `import tensorflow as tf`. In the lib >> site-packages directory, there's a tensorflow_gpu module but no tensorflow module. – David Jul 24 '17 at 05:54
  • Compile your own version of TensorFlow. It's quite easy, and you will train your model much faster even if you run your code on the CPU. – Henry Sou Jun 06 '19 at 08:03

0 Answers