
How can we free the GPU memory of an array with Alea GPU (on the GPU card)?

Inside the function/sub, if we want to free the GPU memory of the arrays dOutputs and dInputs, how should we do it?

1/ Will "dOutputs.Dispose(); dInputs.Dispose();" free the GPU memory ?

2/ Does a "GC.Collect()" for GPU exist ? is it necessary ?

3/ Does Alea GPU have a command to totally free the GPU memory?

    private void button3_Click(object sender, EventArgs e)
    {
        textBox3.Text = "";
        var worker = Worker.Default;
        const int rows = 10;
        const int cols = 5;
        var rng = new Random();
        var inputs = new double[rows, cols];
        for (var row = 0; row < rows; ++row)
        {
            for (var col = 0; col < cols; ++col)
            {
                inputs[row, col] = rng.Next(1, 100);
            }
        }
        var dInputs = worker.Malloc(inputs);              // allocate GPU memory and copy inputs to it
        var dOutputs = worker.Malloc<double>(rows, cols); // allocate GPU memory for the outputs
        var lp = new LaunchParam(1, 1);
        worker.Launch(Kernel, lp, dOutputs.Ptr, dInputs.Ptr, rows, cols);
        var outputs = new double[rows, cols];
        dOutputs.Gather(outputs);                         // copy the results back to the host
        Assert.AreEqual(inputs, outputs);
        dOutputs.Dispose();
        dInputs.Dispose();
    }

4/ As GPU cards have limited memory, we need to use Single/Int16/Int32 instead of double. I tried:

       var inputs = new Single[rows, cols];
       var dOutputs = worker.Malloc<Single>(rows, cols);
       var inputs2 = new Int16[rows, cols];

but

       worker.Launch(Kernel, lp, dOutputs.Ptr, dInputs.Ptr, rows, cols);

doesn't accept them. I get the error "there is some invalid argument".

How can we make worker.Launch(Kernel, lp, ...) accept Int16, Int32, and Single?

Emmanuel

2 Answers


The type returned by Worker.Malloc() is DeviceMemory, which represents one memory allocation on the GPU. It is disposable, so you can dispose of it yourself or let the GC clean it up. Note, however, that if you rely on the GC, there is a delay (collection happens on a GC thread), and since GPU memory is all pinned memory (it cannot be swapped to disk), it is recommended that you dispose of it explicitly. To make the code easier, you can use the C# "using" keyword.
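For example, the body of your button3_Click could be wrapped in using blocks so both allocations are freed deterministically; this is a minimal sketch reusing the inputs array, Kernel, and worker from your question:

    using (var dInputs = worker.Malloc(inputs))
    using (var dOutputs = worker.Malloc<double>(rows, cols))
    {
        var lp = new LaunchParam(1, 1);
        worker.Launch(Kernel, lp, dOutputs.Ptr, dInputs.Ptr, rows, cols);
        var outputs = new double[rows, cols];
        dOutputs.Gather(outputs);
    } // both Dispose() calls run here, releasing the GPU memory immediately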

Of course, Alea GPU works with those types; the problem you met is that you need to specify the exact type. Note that 1.0 is of type double, while 1.0f is of type Single. The reason is that the kernel function is provided to the worker's launch method as a delegate, so you must supply arguments of the correct types to help it find the right kernel method; implicit type conversion doesn't work here. For numeric literal suffixes, you can refer to the C# documentation.

I wrote a small example, and it works:

// Overload taking a float (Single) buffer and value.
static void Kernel(deviceptr<float> data, float value)
{
    data[0] = value;
}

// Overload taking a short (Int16) buffer and value.
static void Kernel(deviceptr<short> data, short value)
{
    data[0] = value;
}

static void Main(string[] args)
{
    var worker = Worker.Default;
    var lp = new LaunchParam(1, 1);
    using (var dmemSingle = worker.Malloc<float>(1))
    using (var dmemShort = worker.Malloc<short>(1))
    {
        // The f suffix and the (short) cast give the arguments exact types,
        // so the correct Kernel overload is resolved.
        worker.Launch(Kernel, lp, dmemSingle.Ptr, 4.1f);
        worker.Launch(Kernel, lp, dmemShort.Ptr, (short)1);
    }
}
Xiang Zhang

You can have overloaded definitions on the GPU! This is excellent. As in regular CPU code, you can have more than one definition of the same function on the GPU.

By putting a "using" block inside a loop, the GPU memory is released as soon as you leave the block on each iteration (see the sketch below). Excellent!
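For instance, allocating inside the loop body with using keeps GPU memory from accumulating across iterations; a minimal sketch, assuming the worker, lp, and the float Kernel overload from the answer above:

    for (var i = 0; i < 100; ++i)
    {
        using (var dmem = worker.Malloc<float>(1))
        {
            worker.Launch(Kernel, lp, dmem.Ptr, (float)i);
        } // dmem.Dispose() runs here, freeing this iteration's GPU allocation
    }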

Thank you

Emmanuel
  • Yes, overloading is good, but since you provide the kernel method just by the name "Kernel", you must give parameters of the exact types so we can find which "Kernel" you mean; that is why the implicit numeric conversion doesn't work here. – Xiang Zhang Oct 22 '15 at 11:59