Questions tagged [tesla]

Nvidia Tesla is a brand of GPUs targeting the high performance computing market.

Nvidia Tesla has very high computational power (measured in floating point operations per second or FLOPS) compared to microprocessors. Teslas power some of the world's fastest supercomputers, including Titan at Oak Ridge National Laboratory and Tianhe-1A.

Tesla products are primarily used:

  • In simulations and large-scale calculations (especially floating-point workloads).
  • For high-end image generation in professional and scientific applications.
  • For password brute-forcing.


89 questions
12
votes
2 answers

Why is it faster to transfer data from CPU to GPU rather than GPU to CPU?

I've noticed that transferring data to recent high-end GPUs is faster than gathering it back to the CPU. Here are the results using a benchmarking function provided to me by MathWorks tech support, running on an older Nvidia K20 and a recent Nvidia…
avgn
  • 982
  • 6
  • 19
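One common factor is how the host buffer is allocated: with pageable host memory the two directions are often not symmetric, while pinned memory tends to narrow the gap. Below is a minimal two-way bandwidth sketch using CuPy rather than the MATLAB gpuArray setup from the question; the buffer size and device index are arbitrary.

    import numpy as np
    import cupy as cp

    nbytes = 256 * 1024 * 1024                 # 256 MiB test buffer (arbitrary size)
    host = np.random.rand(nbytes // 8)         # float64, pageable host memory

    _ = cp.asarray(host)                       # warm-up copy / context creation
    cp.cuda.Device().synchronize()

    start, stop = cp.cuda.Event(), cp.cuda.Event()

    start.record()
    dev = cp.asarray(host)                     # host -> device copy
    stop.record(); stop.synchronize()
    h2d_ms = cp.cuda.get_elapsed_time(start, stop)

    start.record()
    back = cp.asnumpy(dev)                     # device -> host copy
    stop.record(); stop.synchronize()
    d2h_ms = cp.cuda.get_elapsed_time(start, stop)

    print(f"H2D {nbytes / h2d_ms / 1e6:.1f} GB/s | D2H {nbytes / d2h_ms / 1e6:.1f} GB/s")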
9
votes
1 answer

Nvidia Tesla vs 480 for CUDA programming

I am doing research on CUDA programming. I have the option to buy a single Nvidia Tesla or around 4-5 Nvidia 480s. What do you recommend?
scatman
  • 14,109
  • 22
  • 70
  • 93
8
votes
1 answer

What does 'Off' mean in the output of nvidia-smi?

I run TensorFlow code on the GPU. The image below shows the nvidia-smi info. I want to ask: what does 'Off' mean in the output of nvidia-smi? Also, what does the "C" type mean here? Does my code run on the GPU or the CPU in this situation?
programmer
  • 577
  • 1
  • 9
  • 21
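In that table, 'Off' appears under the Persistence-M (persistence mode) and Disp.A (display active) columns, and a process whose Type is 'C' holds a compute context, so a 'C' entry with your script's PID means the code is running on the GPU. A small sketch that reads the same information through the pynvml bindings (GPU index 0 assumed):

    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    mode = pynvml.nvmlDeviceGetPersistenceMode(handle)   # 0 = Off, 1 = On
    print("Persistence mode:", "On" if mode else "Off")

    # Processes listed here are the "C" (compute) entries in nvidia-smi
    for proc in pynvml.nvmlDeviceGetComputeRunningProcesses(handle):
        print("Compute process on GPU 0: pid", proc.pid, "-", proc.usedGpuMemory, "bytes")

    pynvml.nvmlShutdown()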
7
votes
3 answers

Mixed precision not enabled with TF1.4 on Tesla V100

I was interested in testing my neural net (an autoencoder that serves as a generator, plus a CNN as a discriminator) that uses 3-D conv/deconv layers with the new Volta architecture, to benefit from mixed-precision training. I compiled the most recent…
6
votes
4 answers

Cannot run CUDA code that queries NVML - error regarding libnvidia-ml.so

Recently a colleague needed to use NVML to query device information, so I downloaded the Tesla development kit 3.304.5 and copied the file nvml.h to /usr/include. To test, I compiled the example code in tdk_3.304.5/nvml/example and it worked…
Brian R
  • 785
  • 1
  • 6
  • 13
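The header from the development kit only gets the example compiling; the symbols live in libnvidia-ml.so, which is installed by the display driver, so the usual fix is linking with -lnvidia-ml and making sure the driver's copy is the one picked up at runtime. A quick way to confirm the runtime library itself is present and functional, sketched here with ctypes instead of the C example:

    import ctypes

    # libnvidia-ml.so.1 ships with the NVIDIA driver, not with the Tesla Deployment Kit
    nvml = ctypes.CDLL("libnvidia-ml.so.1")    # raises OSError if the driver library is missing

    print("nvmlInit returned", nvml.nvmlInit())          # 0 == NVML_SUCCESS

    count = ctypes.c_uint()
    nvml.nvmlDeviceGetCount(ctypes.byref(count))
    print("NVML sees", count.value, "device(s)")

    nvml.nvmlShutdown()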
5
votes
2 answers

Is there any benefit in nVidia Tesla cards?

I'm planning to buy a serious GPU for running a parallel algorithm on (budget 2k-4k). I see supercomputers everywhere featuring Nvidia Tesla GPU cards "made especially for GPGPU". While this seems very nice at first sight, a closer reading makes…
user1111929
  • 6,050
  • 9
  • 43
  • 73
5
votes
2 answers

100% GPU utilization on a GCE without any processes

I've just started an instance on Google Compute Engine with 2 GPUs (Nvidia Tesla K80), and right after startup I can see via nvidia-smi that one of them is already fully utilized. I've checked the list of running processes and there is…
Vit D
  • 193
  • 1
  • 7
  • 25
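A way to cross-check what nvidia-smi reports is to query NVML directly: per-GPU utilization plus the compute processes it sees. (On K80 instances, reports of 100% utilization with no processes are often tied to persistence mode being disabled; that is an assumption to verify, not a confirmed diagnosis.) A hedged pynvml sketch:

    import pynvml

    pynvml.nvmlInit()
    for i in range(pynvml.nvmlDeviceGetCount()):
        h = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(h)        # same numbers nvidia-smi shows
        procs = pynvml.nvmlDeviceGetComputeRunningProcesses(h)
        print(f"GPU {i}: util={util.gpu}%  mem={util.memory}%  compute processes={len(procs)}")
    pynvml.nvmlShutdown()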
5
votes
0 answers

Python: How do we parallelize a python program to take advantage of a GPU server?

In our lab, we have an NVIDIA Tesla K80 GPU accelerator in a machine with the following characteristics: Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz, 48 CPU processors, 128GB RAM, 12 CPU cores, running under 64-bit Linux. I am running the following code which…
Desta Haileselassie Hagos
  • 23,140
  • 7
  • 48
  • 53
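Whether a GPU helps depends on the hot loop being data-parallel, but as a starting point here is a minimal numba.cuda sketch, with the kernel name, array size, and the scale-by-a-constant workload invented purely for illustration; numba copies the NumPy arrays to and from the K80 around the launch:

    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale(out, x, factor):
        i = cuda.grid(1)             # global thread index
        if i < x.size:
            out[i] = x[i] * factor

    x = np.arange(1000000, dtype=np.float32)
    out = np.zeros_like(x)

    threads = 256
    blocks = (x.size + threads - 1) // threads
    scale[blocks, threads](out, x, 2.0)   # implicit host<->device transfers
    print(out[:4])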
5
votes
2 answers

Advantages of Tesla over GeForce

I've read what information I could find on the Internet about the differences between these two series of cards, but I can't shake the feeling that much of it reads like advertising. While the most powerful GeForce costs roughly $700, starting prices for…
Raven
  • 4,783
  • 8
  • 44
  • 75
4
votes
1 answer

OpenCL on Nvidia Tesla: No platforms found

I have access to a system running Debian 7 with two Nvidia Tesla cards installed. I'd like to do some benchmarking using OpenCL. However, OpenCL fails to find any compatible platforms. Do I need any additional libraries or special drivers in order…
Daniel Becker
  • 771
  • 1
  • 7
  • 25
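On Linux the OpenCL loader finds vendor implementations through ICD files; the NVIDIA one (typically /etc/OpenCL/vendors/nvidia.icd plus libnvidia-opencl.so, installed with the driver packages, on Debian usually via nvidia-opencl-icd) is the piece that most often goes missing. A hedged cross-check with PyOpenCL, which sees exactly the platforms the C API would:

    import pyopencl as cl

    # With no ICD registered this returns an empty list or raises,
    # which points at the driver installation rather than application code.
    platforms = cl.get_platforms()
    print("OpenCL platforms found:", [p.name for p in platforms])

    for p in platforms:
        for d in p.get_devices():
            print(p.name, "->", d.name)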
4
votes
2 answers

maximum number of threads on gpu

I am using a Tesla T10 device. It has 2 CUDA devices, the maximum number of threads in a block is 512, the maximum thread dimensions are (512, 512, 64), the maximum grid size is (65535, 65535, 1), and it has 30 multiprocessors on each CUDA…
user2182259
  • 89
  • 1
  • 6
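Those limits can also be read back at runtime instead of from the documentation. A hedged CuPy sketch (device 0 assumed; current CuPy builds no longer support a compute-capability 1.3 part like the T10, so take it as an illustration of the query rather than of that exact card):

    import cupy as cp

    props = cp.cuda.runtime.getDeviceProperties(0)
    print("max threads per block:", props["maxThreadsPerBlock"])
    print("max block dimensions: ", props["maxThreadsDim"])
    print("max grid dimensions:  ", props["maxGridSize"])
    print("multiprocessors:      ", props["multiProcessorCount"])
    print("max threads per SM:   ", props["maxThreadsPerMultiProcessor"])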
4
votes
4 answers

Disabled ECC support for Tesla C2070 and Ubuntu 12.04

I have a headless workstation running Ubuntu 12.04 Server and recently installed a new Tesla C2070 card, but when running the examples from the CUDA SDK, I get the following error: NVIDIA_GPU_Computing_SDK/C/bin/linux/release% ./reduction [reduction]…
user1651156
  • 53
  • 1
  • 5
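ECC state on a Fermi Tesla can be inspected independently of the SDK samples, and toggled with nvidia-smi -e 0|1 followed by a reboot; if the current and pending modes disagree, the card is still waiting for that reboot. A sketch via pynvml, assuming its nvmlDeviceGetEccMode wrapper returns the (current, pending) pair:

    import pynvml

    pynvml.nvmlInit()
    h = pynvml.nvmlDeviceGetHandleByIndex(0)

    current, pending = pynvml.nvmlDeviceGetEccMode(h)   # 0 = disabled, 1 = enabled
    print("ECC current:", current, "pending:", pending)

    pynvml.nvmlShutdown()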
3
votes
3 answers

Bleak (python) does not respond on connect

I have found the correct Bluetooth address of the device I want to connect to. When I run the code below, it prints "Connecting to device..." but then hangs and never prints "Connected" or finishes running. No errors are thrown. import asyncio from…
kbrin-1372
  • 135
  • 1
  • 8
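A hanging connect is easier to debug once it is bounded in time. A minimal bleak sketch (the address is a placeholder, and client.is_connected / client.services are assumed to follow the property-style API of recent bleak releases):

    import asyncio
    from bleak import BleakClient

    ADDRESS = "XX:XX:XX:XX:XX:XX"          # placeholder, not from the original question

    async def main():
        print("Connecting to device...")
        # The explicit timeout turns a silent hang into a raised error
        async with BleakClient(ADDRESS, timeout=20.0) as client:
            print("Connected:", client.is_connected)
            for service in client.services:
                print("  service", service.uuid)

    asyncio.run(main())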
3
votes
1 answer

Using CUDA compiled for compute capability 3.7 on Maxwell GPUs?

My development workstations currently have NVIDIA Quadro K2200 and K620 cards, both of which have CUDA compute capability 5.0. However, the final production system has a Tesla K80, which has CUDA compute capability 3.7. Is it possible to install and…
3
votes
1 answer

total number of threads on nvidia Tesla

What is the total number of threads that can run concurrently on an Nvidia Tesla, say an S1070?
mkri
  • 31
  • 2
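As a rough estimate rather than a definitive figure: the hardware ceiling on concurrently resident threads is (max resident threads per SM) x (number of SMs). An S1070 packages four T10 GPUs, each with 30 SMs and, at compute capability 1.3, up to 1024 resident threads per SM (figures assumed from NVIDIA's published specifications):

    sms_per_gpu = 30          # streaming multiprocessors on a T10
    threads_per_sm = 1024     # resident-thread limit at compute capability 1.3
    gpus_in_s1070 = 4         # an S1070 contains four T10 GPUs

    per_gpu = sms_per_gpu * threads_per_sm
    print("Resident threads per T10 GPU:", per_gpu)              # 30720
    print("Across the whole S1070:", per_gpu * gpus_in_s1070)    # 122880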