I've noticed that CUDA applications tend to have a rough maximum run-time of 5-15 seconds before they fail and exit. I realize it's ideal not to have a CUDA application run that long, but assuming that CUDA is the correct choice and that the amount of sequential work per thread makes a long run unavoidable, is there any way to extend this amount of time or to get around it?
8 Answers
I'm not a CUDA expert; I've been developing with the AMD Stream SDK, which AFAIK is roughly comparable.
You can disable the Windows watchdog timer, but that is strongly discouraged, for reasons that should be obvious.
To disable it, open regedit, navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Watchdog\Display, create a REG_DWORD value named DisableBugCheck, and set it to 1.
You may also need to do something in the NVIDIA control panel. Look for some reference to "VPU Recovery" in the CUDA docs.
Ideally, you should be able to break your kernel operations up into multiple passes over your data to break it up into operations that run in the time limit.
Alternatively, you can divide the problem domain up so that it's computing fewer output pixels per command. I.e., instead of computing 1,000,000 output pixels in one fell swoop, issue 10 commands to the GPU to compute 100,000 each.
The basic unit that has to fit within the time slice is not your entire application, but the execution of a single command buffer. In the AMD Stream SDK, a long sequence of operations can be broken up into multiple time slices by explicitly flushing the command queue with a CtxFlush() call. Perhaps CUDA has something similar?
You should not have to read all of your data back and forth across the PCIe bus on every time slice; you can leave your textures, etc. in GPU local memory. You just need some command buffers to complete occasionally, to prove to the OS that you're not stuck in an infinite loop.
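Putting the last few paragraphs together, here is a minimal CUDA sketch of the chunking idea, assuming the work can be expressed per output element. The kernel name, buffer names, and per-element work are illustrative, not from the question:

```
#include <cuda_runtime.h>

// Illustrative kernel; the per-element work is a stand-in for the real computation.
__global__ void processChunk(float *out, const float *in, int offset, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        out[offset + i] = in[offset + i] * 2.0f;
}

int main()
{
    const int total = 1000000;   // 1,000,000 output elements overall
    const int chunk = 100000;    // 100,000 per launch, as suggested above
    float *d_in, *d_out;
    cudaMalloc(&d_in,  total * sizeof(float));
    cudaMalloc(&d_out, total * sizeof(float));
    // ... copy the input into d_in once; it stays resident in GPU memory ...

    for (int offset = 0; offset < total; offset += chunk) {
        processChunk<<<(chunk + 255) / 256, 256>>>(d_out, d_in, offset, chunk);
        cudaDeviceSynchronize();  // each short launch completes well inside the time slice
    }

    // ... copy d_out back to the host once, after all chunks are done ...
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```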
Finally, GPUs are fast, so if your application is not able to do useful work in that 5 or 10 seconds, I'd take that as a sign that something is wrong.
[EDIT Mar 2010 to update:] (outdated again, see the updates below for the most recent information) The registry key above is out-of-date. I think that was the key for Windows XP 64-bit. There are new registry keys for Vista and Windows 7. You can find them here: http://www.microsoft.com/whdc/device/display/wddm_timeout.mspx or here: http://msdn.microsoft.com/en-us/library/ee817001.aspx
[EDIT Apr 2015 to update:] This is getting really out of date. The easiest way to disable TDR for Cuda programming, assuming you have the NVIDIA Nsight tools installed, is to open the Nsight Monitor, click on "Nsight Monitor options", and under "General" set "WDDM TDR enabled" to false. This will change the registry setting for you. Close and reboot. Any change to the TDR registry setting won't take effect until you reboot.
[EDIT August 2018 to update:] Although the NVIDIA tools allow disabling the TDR now, the same question is relevant for AMD/OpenCL developers. For those: The current link that documents the TDR settings is at https://learn.microsoft.com/en-us/windows-hardware/drivers/display/tdr-registry-keys

-
I'm not a SIMD programmer, nor do I play one on TV, but IMHO it's a bit too general to say that "Finally, GPUs are fast, so if your application is not able to do useful work in that 5 or 10 seconds, I'd take that as a sign that something is wrong." In scientific applications (like ones CUDA is often used for), sometimes you just have a lot to compute. – San Jacinto Aug 31 '10 at 14:44
-
San Jacinto: See Tom's answer below. The timeout is reasonable in the case where the GPU you are computing on is also your display GPU. In the case where it is not used for display, you have more options. – Ade Miller Mar 30 '11 at 00:48
-
It's definitely wrong to say that the watchdog shouldn't be disabled. The watchdog is completely broken: it triggers when single-stepping in the debugger, and it tends to completely freeze the system in multi-monitor/displayport configurations, which isn't any help to anyone. – Glenn Maynard Apr 03 '15 at 23:47
-
@Glenn. The NSight Cuda debugger has a software preemption mode so that it will not trigger the TDR while you're single-stepping with the debugger. Look for it under the NSight options menu. If you're using a GPU that has a display attached, the debugger will use that mode automatically. If you're using a GPU that doesn't have a display attached, then turning off the TDR or setting it to a really long value is reasonable. – Die in Sente Apr 05 '15 at 16:44
-
Given that the watchdog hard-crashes my whole system (with the lovely side-effect of making two of my monitors flash spastically, and making my speakers blast DMA loop noise), I think I'll stick with turning it off. – Glenn Maynard Apr 06 '15 at 17:23
-
@Glenn Unless you're still running Windows XP, a TDR should NOT hard-crash your whole system. It should just reset/restart the WDDM driver. The displays should blank out for a second or two and come back. Of course, any apps (Cuda or graphics) that were using the GPU will lose context and probably crash, but the symptoms you're describing should NOT happen with a TDR. – Die in Sente Apr 08 '15 at 05:17
-
@DieinSente Idealism is nice, but in the real world, it sure does crash Windows 7 for me. – Glenn Maynard Apr 08 '15 at 16:04
-
@Glenn Whatever works for you. But the WHOLE IDEA of TDR is for the OS to recover from a hung GPU and avoid exactly what you're experiencing. There must be something unusual wrong with your particular system that is causing a kernel-mode driver to crash. Round up the usual suspects. – Die in Sente Apr 09 '15 at 15:37
-
I can certainly agree with @GlennMaynard that sometimes a TDR timeout will lockup my machine and require me to reboot my machine (either it resets, or hasn't recovered after 2 minutes like this). However sometimes it also manages to recover. My personal speculation is that increasing `TdrDdiDelay` may fix this, as this appears to be the time limit for the WDDM driver to reset (particularly demanding work may cause it to take longer than the default of 5 seconds?). Details of `TdrDdiDelay` here: https://msdn.microsoft.com/en-us/library/windows/hardware/ff569918(v=vs.85).aspx – Robadob Mar 30 '16 at 13:10
-
This answer saved my life. I wasn't able to figure out why the kernel was failing randomly at different places. – user3667089 Feb 09 '17 at 17:09
-
Thanks, but how do you turn off the DPC watchdog for all drivers, not just for "Display" (via the registry editor)? – DDRRSS Sep 15 '20 at 22:21
On Windows, the graphics driver has a watchdog timer that kills any shader programs that run for more than 5 seconds. Note that the Xorg/XFree86 drivers don't do this, so one possible workaround is to run the CUDA apps on Linux.
AFAIK it is not possible to disable the watchdog timer on Windows. The only way to get around this on Windows is to use a second card that has no displayed screens on it. It doesn't have to be a Tesla but it must have no active screens.

-
Actually, on Windows any device with a WDDM driver will have the watchdog timer problem, whether it has a display attached or not. The NVIDA Tesla cards work around this by having a completely different type of driver (the TCC or Tesla Compute Cluster) driver, which doesn't identify the GPU to the OS as display adapter. If you just plug in a second video card (Radeon or GeForce) with no displays attached, it will still be recognized by the OS as a WDDM display adapter device, and the watchdog timer will still apply. – Die in Sente Mar 02 '14 at 23:17
Resolve Timeout Detection and Recovery - WINDOWS 7 (32/64 bit)
Create a registry key in Windows to change the TDR settings to a higher amount, so that Windows will allow for a longer delay before TDR process starts.
Open Regedit from Run or a command prompt.
In Windows 7, navigate to the correct registry key area to create the new value:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers
There will probably be one value in there already, called DxgKrnlVersion, as a DWORD.
Right-click and create a new REG_DWORD value named TdrDelay. The value assigned to it is the number of seconds before TDR kicks in; it is currently 2 by default in Windows (even though the registry value doesn't exist until you create it). Assign it a new value (I tried 4 seconds), which doubles the time before TDR. Then restart the PC; the new value won't take effect until you reboot.
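For reference, here is the same change expressed as a .reg file, as a sketch of the steps above (back up your registry first; the dword value is the delay in seconds, and a reboot is still required):

```
Windows Registry Editor Version 5.00

; Sets the TDR delay to 4 seconds, matching the steps above.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:00000004
```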
Source: Win7 TDR (Driver Timeout Detection & Recovery). I have also verified this myself and it works fine.

The most basic solution is to pick a point in the calculation, some percentage of the way through, that I am sure the GPU I am working with can complete in time, save all the state information, and stop, then start again from that point.
Update: For Linux: Exiting X will allow you to run CUDA applications as long as you want. No Tesla is required (a GeForce 9600 was used when testing this).
One thing to note, however, is that if X is never entered, the drivers probably won't be loaded, and it won't work.
It also seems that on Linux, simply having no X displays up at the time will also work, so X does not need to be exited as long as you switch to a non-X full-screen virtual terminal.

-
If you're not loading X then you can use a script to load the CUDA driver. Check out the Getting Started guide (http://developer.download.nvidia.com/compute/cuda/3_2_prod/docs/Getting_Started_Linux.pdf) for more information. – Tom Jan 07 '11 at 15:22
This isn't possible. The time-out is there to prevent bugs in calculations from taking up the GPU for long periods of time.
If you use a dedicated card for CUDA work, the time limit is lifted. I'm not sure if this requires a Tesla card, or if a GeForce with no monitor connected can be used.
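You can at least check which case applies to a given card at runtime: the CUDA runtime reports whether a run-time limit applies to each device via cudaDeviceProp::kernelExecTimeoutEnabled. A sketch like the following (the printout and device-selection policy are illustrative) picks the first card that is free of the limit:

```
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d (%s): run-time limit %s\n", dev, prop.name,
               prop.kernelExecTimeoutEnabled ? "yes" : "no");
        if (!prop.kernelExecTimeoutEnabled) {
            cudaSetDevice(dev);  // run the long kernels on this card
            break;
        }
    }
    return 0;
}
```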

-
It would be useful to determine which of these cases it is. I'll have to try a non-tesla card with no monitor attached and find out. – rck Feb 02 '09 at 23:40
-
I just tried this out. No Tesla card needed. Using Linux, I actually just didn't bother going into X and the Limit was lifted. – rck Feb 05 '09 at 21:39
-
So, as other answers suggest, it is actually possible... can you rephrase your answer? – einpoklum Dec 02 '13 at 07:02
The solution I use is:
1. Pass all information to device.
2. Run iterative versions of algorithms, where each iteration invokes the kernel on the memory already stored within the device.
3. Finally transfer memory to host only after all iterations have ended.
This enables control over the iterations from the CPU (including the option to abort), without the costly device<-->host memory transfers between iterations.
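A minimal CUDA sketch of that pattern, assuming the algorithm can be expressed as repeated short kernel launches (the kernel body and names are placeholders for the real algorithm):

```
#include <cuda_runtime.h>

// Placeholder kernel: one iteration's worth of work on state already on the device.
__global__ void step(float *state, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        state[i] += 1.0f;
}

void run(float *h_state, int n, int iterations)
{
    float *d_state;
    cudaMalloc(&d_state, n * sizeof(float));
    cudaMemcpy(d_state, h_state, n * sizeof(float), cudaMemcpyHostToDevice); // 1. pass everything in once

    for (int it = 0; it < iterations; ++it) {          // 2. each iteration is one short launch
        step<<<(n + 255) / 256, 256>>>(d_state, n);
        cudaDeviceSynchronize();                       // CPU regains control here (could abort)
    }

    cudaMemcpy(h_state, d_state, n * sizeof(float), cudaMemcpyDeviceToHost); // 3. read back once at the end
    cudaFree(d_state);
}
```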

The watchdog timer only applies on GPUs with a display attached.
On Windows the timer is part of the WDDM; it is possible to modify the settings (timeout, behaviour on reaching timeout, etc.) with some registry keys. See this Microsoft article for more information.

-
Hi Tom, I have modified the watchdog timer already (to ~6 days) and have managed to get a single kernel to run for 40 seconds. I've just tried running a significantly larger one but I keep getting an "ErrorLaunch TimeOut" error. I only have a single GPU, so I was wondering if there is something else which might be forcing the GPU to respond before it's finished the kernel, especially since it should only take about 4-5 minutes to run and the timeout is set to such a large number? Thanks for your time, I really appreciate it. – Hans Rudel Jul 07 '13 at 00:28
It is possible to disable this behavior in Linux. Although the "watchdog" has an obvious purpose, it may cause some very unexpected results when doing extensive computations using shaders / CUDA.
The option can be toggled in your X configuration (likely /etc/X11/xorg.conf): adding Option "Interactive" "0" to the Device section of your GPU does the job.
See CUDA Visual Profiler 'Interactive' X config option? for details on the config, and ftp://download.nvidia.com/XFree86/Linux-x86/270.41.06/README/xconfigoptions.html#Interactive for a description of the parameter.
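As a sketch, the relevant Device section might look like this (the Identifier value is just an example from a typical NVIDIA xorg.conf; only the Option line is the actual change):

```
Section "Device"
    Identifier  "Device0"
    Driver      "nvidia"
    Option      "Interactive" "0"   # disable the watchdog for long-running kernels
EndSection
```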