Questions tagged [nvidia-titan]
11 questions
369 votes · 16 answers
How to prevent tensorflow from allocating the totality of a GPU memory?
I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each.
For small to moderate size models, the 12 GB of the Titan X is usually enough for 2–3 people to…

Fabien C. (3,845 rep)
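The usual fix, sketched below assuming the TensorFlow 2.x API (the question predates it; in TF 1.x the equivalent was tf.ConfigProto(gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.3))), is to opt out of the default grab-everything allocator:

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

# Option 1: allocate GPU memory on demand instead of reserving the
# whole 12 GB up front, so several users can share one Titan X.
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

# Option 2 (use instead of option 1, not together with it): hard-cap this
# process at a fixed slice of the card; 4096 MB is an arbitrary example.
# tf.config.set_logical_device_configuration(
#     gpus[0], [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])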
7 votes · 1 answer
Training TensorFlow model with summary operations is much slower than without summary operations
I am training an Inception-like model using TensorFlow r1.0 on an Nvidia Titan X GPU.
I added some summary operations to visualize the training procedure, using code as follows:
def variable_summaries(var):
"""Attach a lot of summaries to a Tensor…

Da Tong (2,018 rep)
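A common mitigation, shown here as a sketch against the TF 1.x graph API from the question (the toy model and the /tmp/logs path are illustrative, not from the original post), is to evaluate the merged summary op only every N steps, since histogram summaries in particular add measurable per-step cost:

import numpy as np
import tensorflow as tf  # assumes TF 1.x, as in the question

x = tf.placeholder(tf.float32, [None, 10])
w = tf.Variable(tf.zeros([10, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

tf.summary.histogram("weights", w)  # histogram summaries are the costly ones
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter("/tmp/logs", sess.graph)
    batch = np.random.rand(32, 10).astype("float32")
    for step in range(1000):
        if step % 100 == 0:
            # Pay for summary evaluation only on every 100th step.
            summary, _ = sess.run([merged, train_op], {x: batch})
            writer.add_summary(summary, step)
        else:
            sess.run(train_op, {x: batch})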
3 votes · 1 answer
How to decide test batch size to fully utilise an NVIDIA Titan X
When training a deep learning model, I found that the GPU is not fully utilised if I set the train and validation (test) batch sizes to be the same, say 32, 64, ..., 512.
Then I checked the NVIDIA Titan X specifications:
NVIDIA CUDA® Cores: 3584
Memory: 12 GB…

user2262504 (7,057 rep)
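Core counts alone don't answer this; the practical approach is an empirical sweep. A sketch (assuming TF 2.x Keras; the layer sizes are arbitrary stand-ins for the model in question) that measures inference throughput per batch size while nvidia-smi reports utilisation:

import time
import numpy as np
import tensorflow as tf

# Arbitrary stand-in model; substitute the real network under test.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1024, activation="relu", input_shape=(8500,)),
    tf.keras.layers.Dense(10),
])

for batch in (32, 64, 128, 256, 512, 1024):
    x = np.random.rand(batch, 8500).astype("float32")
    model.predict(x, batch_size=batch, verbose=0)  # warm-up pass
    t0 = time.perf_counter()
    for _ in range(10):
        model.predict(x, batch_size=batch, verbose=0)
    dt = time.perf_counter() - t0
    print(f"batch {batch:5d}: {10 * batch / dt:,.0f} samples/s")

Throughput typically plateaus once the card is saturated; the smallest batch size on the plateau is usually the sensible choice for testing.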
2 votes · 0 answers
Tensorflow 1.8 on Titan X: CUDA_ERROR_INVALID_DEVICE
I have an Ubuntu 16.04 installation with two NVIDIA GPUs:
GPU 0: GeForce GT 610 (UUID: GPU-710e856e-358f-7b7d-95b7-e4eae7037c1f)
GPU 1: GeForce GTX TITAN X (UUID: GPU-5eacd6f3-f9e4-5795-c75c-26e34ced55ce)
nvidia-smi outputs:
Sun Jun 10 17:21:47 2018 …

v-i-s-h (448 rep)
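One plausible cause, offered as an assumption rather than a confirmed diagnosis: prebuilt TensorFlow 1.8 requires CUDA compute capability 3.0 or higher, and the GT 610 (compute capability 2.1) falls below that, so letting TF enumerate it can break device initialisation. Hiding the old card is the usual workaround:

import os

# Expose only GPU 1 (the TITAN X) to CUDA. This must be set before
# TensorFlow initialises CUDA, i.e. before `import tensorflow`.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf
print(tf.test.gpu_device_name())  # expect "/device:GPU:0", now the TITAN X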
2 votes · 1 answer
cl::Image3D segfaults on nVidia TITAN Black but not on an Intel OpenCL device?
All,
I have the following lines of code for setting up a 3D image in OpenCL:
const size_t NPOLYORDERS = 16;
const size_t NPOLYBINS = 1024;
cl::Image3D my3DImage;
cl::ImageFormat imFormat(CL_R, CL_FLOAT);
my3DImage = cl::Image3D(clContext,…

stix (1,140 rep)
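A cheap first diagnostic, sketched in Python via pyopencl (an assumption; the original code is C++): check whether the intended image extent fits within each device's CL_DEVICE_IMAGE3D_MAX_* limits, since exceeding them is a frequent cause of crashes that appear on one vendor's implementation but not another's:

import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        # A 3D image allocation beyond these limits can crash on some
        # implementations rather than returning a clean error code.
        print(dev.name, "max 3D image:",
              dev.image3d_max_width,
              dev.image3d_max_height,
              dev.image3d_max_depth)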
2 votes · 2 answers
Titan Z vs K40 processor?
I'm using GPUs for scientific computing. Recently Nvidia released its flagship product, the GeForce Titan Z. I would like to know how this processor fares against the Tesla K40 (another NVIDIA product). I have already checked the specs but am keen to know of…

Sakthi K (169 rep)
1 vote · 1 answer
While using TensorFlow 2.0.0: Error: device CUDA:0 not supported by XLA service while setting up XLA_GPU_JIT device number 0
I'm trying to run a CuDNNLSTM layer on a Tesla V100-SXM2 GPU, but an error appears because TensorFlow-GPU 2.0.0 is installed (I cannot downgrade because it is a shared server).
ConfigProto options are deprecated in TF 2.0.0, so previous threads like this one do…

Periko (21 rep)
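In TF 2.0 the ConfigProto knobs moved under tf.config; a sketch of the usual replacement (assuming the goal of those older threads was pinning one visible GPU with on-demand memory, which must run before any op touches the device):

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    # TF 2.0 replacement for ConfigProto(gpu_options=...): select the
    # device and enable on-demand allocation before the GPU is used.
    tf.config.experimental.set_visible_devices(gpus[0], "GPU")
    tf.config.experimental.set_memory_growth(gpus[0], True)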
1 vote · 1 answer
Does the nVidia Titan V support GPUDirect?
I was wondering if someone might be able to help me figure out whether the new Titan V from nVidia supports GPUDirect. As far as I can tell, it seems limited to Tesla and Quadro cards.
Thank you for taking the time to read this.

kuwze (411 rep)
1 vote · 1 answer
cudaError_t 1 : "__global__ function call is not configured" returned from 'cublasCreate(&handle_)'
I run ASR experiments using Kaldi on an SGE cluster consisting of two workstations with TITAN XP GPUs.
Randomly, I run into the following problem:
ERROR (nnet3-train[5.2.62~4-a2342]:FinalizeActiveGpu():cu-device.cc:217) cudaError_t 1 : "__global__ function…

haibing cao (11 rep)
1 vote · 0 answers
GPU Nvidia-Titan X takes too much time to train my network. Works fine with tf cnn-benchmarks
My code is pasted below:
#------- NETWORK 1 ---------------
network1 = Sequential()
# Dense layers - first param is the output size
network1.add(Dense(2048, input_shape=(8500,), name="dense_one"))
network1.add(Dense(2048, activation='sigmoid', name=…

deeplearning (459 rep)
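Slowness of this magnitude often means the model silently runs on the CPU while the benchmarks use the GPU; a quick check (a sketch against TF 1.x, matching the Keras code in the question):

import tensorflow as tf
from tensorflow.python.client import device_lib

# An empty list here means CPU-only execution, which would explain
# training times far behind the tf cnn-benchmarks numbers.
print([d.name for d in device_lib.list_local_devices()
       if d.device_type == "GPU"])

# Log where each op actually lands once the session starts.
config = tf.ConfigProto(log_device_placement=True)
tf.keras.backend.set_session(tf.Session(config=config))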
1 vote · 0 answers
Nvidia Titan X (Pascal) Tensorflow Windows 10
My operating system is Windows 10, and I am using Keras with the TensorFlow backend on the CPU. I want to buy the "Nvidia Titan X (Pascal)" GPU, as it is recommended for TensorFlow on Nvidia…

Bruce (11 rep)