A CUDA context holds state information for controlling computational work on a CUDA device, including memory allocations, loaded code modules, memory-region mappings, and so on.
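For readers landing here, a minimal sketch of explicit context management with the driver API (error checking omitted for brevity; in real code every `cu*` return value should be checked):

```cuda
#include <cuda.h>

int main(void) {
    CUdevice dev;
    CUcontext ctx;

    // The driver API must be initialized once per process.
    cuInit(0);
    cuDeviceGet(&dev, 0);

    // Create a context on device 0; it becomes current on this thread.
    cuCtxCreate(&ctx, 0, dev);

    // All driver-API work (module loads, allocations, launches)
    // now happens inside this context.
    CUdeviceptr p;
    cuMemAlloc(&p, 1024);   // owned by ctx
    cuMemFree(p);

    cuCtxDestroy(ctx);      // releases everything the context still owns
    return 0;
}
```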
Questions tagged [cuda-context]
46 questions
25
votes
2 answers
What is a CUDA context?
Can anyone explain, or point me to a good source on, what a CUDA context is? I searched the CUDA developer guide and was not satisfied with it.
Any explanation or help would be great.

Vraj Pandya
- 591
- 1
- 9
- 13
15
votes
1 answer
How to implement handles for a CUDA driver API library?
Note: The question has been updated to address the questions raised in the comments, and to emphasize that its core is about the interdependencies between the Runtime and Driver APIs.
The CUDA runtime libraries (like…

Marco13
- 53,703
- 9
- 80
- 159
11
votes
2 answers
Multiple CUDA contexts for one device - any sense?
I thought I had a grasp of this, but apparently I do not :) I need to perform parallel H.264 stream encoding with NVENC from frames that are not in any of the formats accepted by the encoder, so I have the following code pipeline:
A callback informing…

Rudolfs Bundulis
- 11,636
- 6
- 33
- 71
8
votes
2 answers
How to create a CUDA context?
How can I create a CUDA context?
The first CUDA call is slow, and I want to create the context before I launch my kernel.

Arkerone
- 1,971
- 2
- 22
- 35
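A common way to pay the initialization cost up front with the runtime API is to touch the device early; `cudaFree(0)` is the classic no-op idiom (the helper name `warm_up` below is illustrative, not part of any API):

```cuda
#include <cuda_runtime.h>

// Force creation of the primary context for `device` before any
// timed or latency-sensitive work runs.
void warm_up(int device) {
    cudaSetDevice(device);
    // Any runtime call that touches the device triggers lazy context
    // creation; cudaFree(0) is a harmless no-op that does exactly that.
    cudaFree(0);
    cudaDeviceSynchronize();  // make sure initialization has completed
}
```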
6
votes
1 answer
CUDA context creation and resource association in runtime API applications
I want to understand how a CUDA context is created and associated with a kernel in CUDA runtime API applications.
I know it is done under the hood by the driver API, but I would like to understand the timeline of the creation.
For a start I know…

ash
- 1,170
- 1
- 15
- 24
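The lazy creation of the primary context can be observed directly by mixing the two APIs; this sketch (assuming device 0 exists) prints the current driver-level context before and after the first runtime call:

```cuda
#include <cuda.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    cuInit(0);

    CUcontext ctx = NULL;
    cuCtxGetCurrent(&ctx);
    printf("before runtime call: %p\n", (void*)ctx);  // typically NULL

    cudaFree(0);  // first runtime call: primary context created lazily

    cuCtxGetCurrent(&ctx);
    printf("after runtime call:  %p\n", (void*)ctx);  // now non-NULL
    return 0;
}
```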
5
votes
1 answer
CUDA streams and context
I am presently using an application that spawns a bunch of pthreads (Linux), each of which creates its own CUDA context. (Using CUDA 3.2 right now.)
The problem I am having is that it seems like each thread having its own context costs a lot…

Derek
- 11,715
- 32
- 127
- 228
5
votes
1 answer
What does cudaSetDevice() do to a CUDA device's context stack?
Suppose I have an active CUDA context associated with device i, and I now call cudaSetDevice(i). What happens? :
Nothing?
Primary context replaces the top of the stack?
Primary context is pushed onto the stack?
It actually seems to be…

einpoklum
- 118,144
- 57
- 340
- 684
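One way to probe this empirically is to compare the context that is current after `cudaSetDevice()` with the device's primary context. This sketch only shows whether the two handles match; it does not prove how the stack was manipulated:

```cuda
#include <cuda.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    cuInit(0);
    cudaSetDevice(0);  // what did this do to the context stack?

    CUcontext current = NULL, primary = NULL;
    cuCtxGetCurrent(&current);

    CUdevice dev;
    cuDeviceGet(&dev, 0);
    cuDevicePrimaryCtxRetain(&primary, dev);

    // If these match, cudaSetDevice made device 0's primary context
    // current rather than creating a fresh one.
    printf("current == primary: %d\n", current == primary);

    cuDevicePrimaryCtxRelease(dev);
    return 0;
}
```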
5
votes
1 answer
Is it possible to share a CUDA context between applications?
I'd like to pass a CUDA context between two independent Linux processes (using POSIX message queues, which I already have set up).
Using cuCtxPopCurrent() and cuCtxPushCurrent(), I can get the context pointer, but this pointer is referenced in the…

Chris Gregg
- 2,376
- 16
- 30
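If sharing the context itself turns out to be impossible, the CUDA IPC API offers a common alternative: a second process maps the same device allocation into its own context. A sketch of the producer side (the helper name `export_buffer` is illustrative; error handling is minimal):

```cuda
#include <cuda_runtime.h>

// Producer side: allocate device memory and export an IPC handle that
// another process can open with cudaIpcOpenMemHandle().
int export_buffer(void **dev_ptr, cudaIpcMemHandle_t *handle) {
    if (cudaMalloc(dev_ptr, 1 << 20) != cudaSuccess) return -1;
    if (cudaIpcGetMemHandle(handle, *dev_ptr) != cudaSuccess) return -1;
    // The raw handle bytes can now be sent over the message queue;
    // the consumer maps the allocation into its own context.
    return 0;
}
```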
4
votes
1 answer
Why are OpenGL and CUDA contexts memory greedy?
I develop software which usually includes both OpenGL and the Nvidia CUDA SDK. Recently, I also started to look for ways to optimize the run-time memory footprint. I noticed the following (Debug and Release builds differ only by 4-7 MB):
Application startup -…

Michael IV
- 11,016
- 12
- 92
- 223
4
votes
1 answer
What is meant by GPU context and GPU hardware channel in NVIDIA's architecture?
While reading some papers related to GPU computing, I got stuck on these two terms, GPU context and GPU hardware channel. Below is a brief mention of them, but I can't understand what they mean:
Command: The GPU operates using the…

HATEM EL-AZAB
- 331
- 1
- 3
- 11
3
votes
1 answer
Can multiple processes share one CUDA context?
This question is a follow-up to Jason R's comment on Robert Crovella's answer to this original question ("Multiple CUDA contexts for one device - any sense?"):
When you say that multiple contexts cannot run concurrently, is this
limited to kernel…

alex
- 10,900
- 15
- 70
- 100
3
votes
1 answer
Reset CUDA context after exception
I have a working app which uses CUDA/C++ but sometimes, because of memory leaks, throws an exception. I need to be able to reset the GPU live; my app is a server, so it has to stay available.
I tried something like this, but it doesn't seem to…

Autruche
- 67
- 1
- 6
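A common recovery approach is `cudaDeviceReset()`, which tears down the primary context. Note that it invalidates all prior allocations, streams, and events, and may not recover from every class of sticky error (the helper name `recover_device` is illustrative):

```cuda
#include <cuda_runtime.h>

// Attempt to recover after a fatal device-side error by destroying the
// primary context and letting the next runtime call recreate it.
// All existing device pointers, streams, and events become invalid.
void recover_device(int device) {
    cudaSetDevice(device);
    cudaDeviceReset();      // destroys the primary context
    cudaSetDevice(device);
    cudaFree(0);            // forces a fresh context to be created
}
```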
3
votes
1 answer
Do I need to provide a GPU context when creating unified memory?
Question 1)
When I call the CUDA driver API, I usually need to first push the context (which represents a GPU runtime) onto the current thread. For a normal cuMalloc, the memory will be allocated on the GPU specified by the context. But if I try to call…

Xiang Zhang
- 2,831
- 20
- 40
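For reference, the driver-API managed-allocation path is typically used with a current context, like any other driver call; a minimal sketch (device 0 assumed, error checks omitted):

```cuda
#include <cuda.h>

int main(void) {
    cuInit(0);
    CUdevice dev;
    CUcontext ctx;
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);  // make a context current on this thread

    // CU_MEM_ATTACH_GLOBAL makes the managed allocation accessible
    // from any device (and from the host), not just this context's GPU.
    CUdeviceptr p;
    cuMemAllocManaged(&p, 4096, CU_MEM_ATTACH_GLOBAL);

    cuMemFree(p);
    cuCtxDestroy(ctx);
    return 0;
}
```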
3
votes
1 answer
Difference on creating a CUDA context
I have a program that uses three kernels. To get the speedups, I was doing a dummy memory copy to create a context, as follows:
__global__ void warmStart(int* f)
{
*f = 0;
}
which is launched before the kernels I want to time as…

pQB
- 3,077
- 3
- 23
- 49
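As an aside, a dummy kernel is not the only way to force context creation before a timed region; any runtime call that touches the device works. A sketch of one common alternative:

```cuda
#include <cuda_runtime.h>

// Pay the one-time context-creation cost here, so that subsequent
// kernel timings measure only the kernels themselves.
void create_context_early(void) {
    cudaFree(0);              // no-op call that triggers initialization
    cudaDeviceSynchronize();  // wait until initialization has finished
}
```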
2
votes
1 answer
cuMemAlloc'ing memory in one CUDA context, and freeing it in another - why does this succeed?
I create two CUDA contexts, "ctx1" and "ctx2", set the current context to "ctx1", allocate 8 bytes of memory, and then switch the current context to "ctx2". Then I free the memory allocated in ctx1. Why does this return CUDA_SUCCESS?
And when I destroy ctx1 and then free…

DplusT
- 21
- 1
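The scenario in the question can be reproduced with a few driver-API calls; one plausible explanation (not confirmed here) is that under unified virtual addressing the driver can resolve a pointer back to its owning context:

```cuda
#include <cuda.h>
#include <stdio.h>

int main(void) {
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);

    CUcontext ctx1, ctx2;
    cuCtxCreate(&ctx1, 0, dev);  // ctx1 is now current
    CUdeviceptr p;
    cuMemAlloc(&p, 8);           // allocation belongs to ctx1

    cuCtxCreate(&ctx2, 0, dev);  // ctx2 is now current
    // Freeing a ctx1 allocation while ctx2 is current: with UVA the
    // driver may locate the owning context, so this can succeed.
    CUresult r = cuMemFree(p);
    printf("cuMemFree from ctx2 returned: %d\n", (int)r);

    cuCtxDestroy(ctx2);
    cuCtxDestroy(ctx1);
    return 0;
}
```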