Recently I updated from CUDA 6.0 to CUDA 7.0, and my CUDA programs that use unified memory allocation stopped working (other programs without unified memory still work, and the CUDA 7.0 template in Visual Studio 2013 still works). Following What is the canonical way to check for errors using the CUDA runtime API?, I found out that cudaMallocManaged() returns an "operation not supported" error. This behavior only started after the update.

My graphics card is a GeForce GTX 780M with compute capability 3.0. My programs are compiled with Visual Studio 2013 targeting 64-bit platform, with arch/code pair being compute_30,sm_30. I am using Windows 10 Pro Insider Preview Evaluation Copy Build 10074. My GeForce driver version is 349.90.

The CUDA 7.0 UnifiedMemoryStreams sample outputs:

GPU Device 0: "GeForce GTX 780M" with compute capability 3.0

Unified Memory not supported on this device

The CUDA 7.0 deviceQuery sample outputs:

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 780M"
  CUDA Driver Version / Runtime Version          7.0 / 7.0
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 4096 MBytes (4294967296 bytes)
  ( 8) Multiprocessors, (192) CUDA Cores/MP:     1536 CUDA Cores
  GPU Max Clock rate:                            797 MHz (0.80 GHz)
  Memory Clock rate:                             2500 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 524288 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.0, CUDA Runtime Version = 7.0, NumDevs = 1, Device0 = GeForce GTX 780M
Result = PASS

Here's a minimal sample that outputs "operation not supported":

#include "cuda_runtime.h"
#include "device_launch_parameters.h"

#include <stdio.h>

// Error-checking macro: wraps a CUDA runtime call and reports failures.
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort = true)
{
    if (code != cudaSuccess)
    {
        fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        if (abort) exit(code);
    }
}

int main()
{
    int N = 2048;
    int *a;
    // This call fails with "operation not supported" on the setup described above.
    gpuErrchk(cudaMallocManaged(&a, N * sizeof(int)));
    a[0] = 0;  // host access to the managed allocation

    cudaFree(a);
    cudaDeviceReset();
    return 0;
}
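As an aside, this condition can be detected at runtime before attempting the allocation, which is presumably what the UnifiedMemoryStreams sample does before printing its message. A sketch using the `managedMemory` field of `cudaDeviceProp` (error handling omitted for brevity):

```
#include "cuda_runtime.h"
#include <stdio.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    if (!prop.managedMemory) {
        // Unified memory is not usable on this device/driver/OS combination;
        // this is the condition the UnifiedMemoryStreams sample reports.
        fprintf(stderr, "Unified Memory not supported on this device\n");
        return 1;
    }

    int *a;
    cudaMallocManaged(&a, 2048 * sizeof(int));
    a[0] = 0;
    cudaFree(a);
    return 0;
}
```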

Update:

I have downgraded from Windows 10 to Windows 8.1 with all other factors unchanged, and now cudaMallocManaged() works flawlessly (UnifiedMemoryStreams now outputs):

GPU Device 0: "GeForce GTX 780M" with compute capability 3.0

Executing tasks on host / device
Task [0], thread [0] executing on device (512)
Task [1], thread [0] executing on device (207)
...
Task [38], thread [0] executing on device (417)
Task [39], thread [0] executing on device (563)
All Done!

Tested with GeForce driver versions 347.62 and 350.12. So yeah, hold off on that Windows upgrade for now if you develop with CUDA 7.0...

  • I have tried rebooting. Sadly it didn't work either. –  May 15 '15 at 10:43
    Windows 10 isn't a supported CUDA platform, so I would be surprised if anything actually worked, let alone unified memory. – talonmies May 15 '15 at 11:12

2 Answers

Windows 10 is not a supported platform for CUDA at this time. Switch to a supported platform, and your Unified Memory operations should begin working again.
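Until then, code that must run on Windows 10 can fall back from cudaMallocManaged() to explicit device allocation with manual copies. A minimal sketch of that workaround (kernel launch elided):

```
#include "cuda_runtime.h"
#include <stdio.h>
#include <stdlib.h>

int main()
{
    const int N = 2048;

    // Separate host buffer replaces direct host access to managed memory.
    int *h_a = (int *)malloc(N * sizeof(int));
    h_a[0] = 0;

    // Explicit device allocation instead of cudaMallocManaged().
    int *d_a;
    cudaMalloc(&d_a, N * sizeof(int));

    // Explicit copies replace the automatic migration unified memory provides.
    cudaMemcpy(d_a, h_a, N * sizeof(int), cudaMemcpyHostToDevice);
    // ... launch kernels operating on d_a ...
    cudaMemcpy(h_a, d_a, N * sizeof(int), cudaMemcpyDeviceToHost);

    cudaFree(d_a);
    free(h_a);
    return 0;
}
```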

– Robert Crovella
At last, you can use unified memory on Windows 10. Driver version 355.98 adds support for it.

– Yale Zhang
  • For reference purposes, how did you discover this information? I can't seem to find much mention of CUDA in nVidia's driver change-logs. (I am wondering whether Managed memory support is also still outstanding for Windows 10.) – Xharlie Sep 24 '15 at 08:34
  • I didn't discover it in any documentation. I just ran my test program, so it was by chance. Given the 2-month wait since the release of Windows 10, it feels like NVIDIA is taking their time to do things right, instead of rushing like AMD did with HBM. – Yale Zhang Sep 24 '15 at 09:10