
Assume that I want to perform parallel computations on a large fixed object, e.g. a large sparse (directed) graph, or any similar kind of object.

For any reasonable computation on this graph or object, such as random walks in the graph, putting the graph in global memory is presumably out of the question for speed reasons.

That leaves local/private memory. If I have understood the GPU architecture correctly, there is virtually no speed difference between (read-only) access to local and private memory. Is that correct? I'm reluctant to copy the graph to private memory, since that would mean every single work unit has to store the entire graph, which could eat up the GPU's memory very quickly (and for very large graphs even reduce the number of cores that can be used and/or make the OS unstable).

So, assuming I'm correct above about the read speed of local vs. private memory, how do I do this in practice? If, for simplicity, I reduce the graph to an int[] from and an int[] to (storing the start and end of each directed edge), I can of course make the kernel look like this:

__kernel void computeMe(__local const int *to, __local const int *from, __global int *result) {
    //...
}

but I don't see how to call this from JOCL, since no private/local/global modifier can be specified on the host side.

Will the local variables be written automatically to the local memory of each workgroup? Or how does this work? It's not clear to me at all how to do this memory assignment correctly.

user1111929

2 Answers


You can't pass values for local memory arguments from the host. The host cannot read/write local memory. To use local memory, you still need to pass the data in as global, then copy from global to local before you use it. This is only beneficial if you are reading the data many times.
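A minimal sketch of that copy-then-use pattern (the kernel name, tile names, and the one-element-per-work-item copy are my own illustration; the __local buffers are only sized by the host, as shown further below):

__kernel void computeMeLocal(__global const int *to,
                             __global const int *from,
                             __global int *result,
                             __local int *toTile,    // sized by the host, no data passed
                             __local int *fromTile)
{
    int lid = get_local_id(0);
    int gid = get_global_id(0);

    // Step 1: each work-item copies one element from global to local memory.
    toTile[lid]   = to[gid];
    fromTile[lid] = from[gid];

    // Step 2: wait until the whole work-group has finished copying.
    barrier(CLK_LOCAL_MEM_FENCE);

    // ... from here on, read toTile/fromTile as often as needed ...
}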

How about constant memory? If your input data does not change and is not too large, putting it into constant memory might give you a considerable speedup. The available constant memory is typically around 16 KB to 64 KB.

__kernel void computeMe(__constant int *to, __constant int *from, __global int *result) {
    //...
}
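On the JOCL side (to address the question of how to call this): there is no modifier to specify on the host. __global and __constant arguments are both passed as ordinary cl_mem buffers, and a __local argument only gets a size, with a null data pointer. A minimal sketch, assuming context and kernel already exist and using a kernel with three buffer arguments like the one above (the buffer names are mine):

import static org.jocl.CL.*;
import org.jocl.*;

// to, from: host-side edge arrays; n: number of edges.
cl_mem memTo = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
        Sizeof.cl_int * n, Pointer.to(to), null);
cl_mem memFrom = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
        Sizeof.cl_int * n, Pointer.to(from), null);
cl_mem memResult = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
        Sizeof.cl_int * n, null, null);

// __constant (or __global) arguments: pass the buffer handle.
clSetKernelArg(kernel, 0, Sizeof.cl_mem, Pointer.to(memTo));
clSetKernelArg(kernel, 1, Sizeof.cl_mem, Pointer.to(memFrom));
clSetKernelArg(kernel, 2, Sizeof.cl_mem, Pointer.to(memResult));

// For a __local argument: reserve its size only, and pass null for the data.
// clSetKernelArg(kernel, 3, Sizeof.cl_int * groupSize, null);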

Edit (add references):

For an example use of __local memory in OpenCL, see here.

For NVIDIA hardware, there are more performance details in the NVIDIA OpenCL Best Practices Guide (PDF), including information on the performance differences between the memory types.

prunge
  • Does local memory have speed comparable to constant memory? I'm iterating over graphs, so I constantly need things like `for (edge=0; edge` … – user1111929 Jan 11 '12 at 09:59
  • Also, I did not manage to find any explanation anywhere of how to write something to the local (non-private) memory. How should one do that? The host can only write to global and constant memory, and the kernel can only write to global and private. How can I put something in the memory that is shared among all work units in a workgroup? There is 32K LOCAL_MEM available per compute unit; I would feel stupid not to use it... Is there no way at all in OpenCL to write something to this space? Or will OpenCL do this automatically when I move a const variable to private in every kernel? – user1111929 Jan 11 '12 at 10:06
  • @user1111929 kernels can read and write __local memory; in fact, this is the only way data gets into local memory. Constant memory doesn't require this initial copy, which means you don't need extra code to copy global to local memory and barrier in your kernel. I don't know about the speed differences between __local and __constant memory, but a pure guess on my part is that they are similar, and both are faster than __global. – prunge Jan 11 '12 at 21:42
  • @user1111929 added references and example to response. – prunge Jan 11 '12 at 21:46
  • Thanks for the links. I have looked at that best practices guide, and it says that "constant memory is cached", but it mentions nowhere what this means. Would you have any idea what it could be? I fail to find any reasonable explanation... Also, the constant memory is off-chip and the local memory is on-chip, right? While constant surely is faster than global, it is off-chip, so it should suffer a much larger penalty (in terms of latency) than local memory, right...? – user1111929 Jan 14 '12 at 03:50
  • "constant memory is cached" means that it is essentially global memory that, when read for the first time, is cached in faster memory, and "For all threads of a half warp, reading from the constant cache is as fast as reading from a register as long as all threads read the same address. Accesses to different addresses by threads within a half warp are serialized, so cost scales linearly with the number of different addresses read by all threads within a half warp." It would be worth experimenting with both types of memory to see what gives your particular problem the best performance. – prunge Jan 15 '12 at 21:45

You wrote "putting the graph in global memory is presumably out of the question for speed reasons" - well, you don't have many other choices. In general, the data lives in global memory.

(As a side note: in particular cases you might recast it as textures, if the element format is suitable. Also, the so-called 'constant' memory on NVIDIA is optimized for 'broadcast'-type operations, meaning all threads read from the same location, which I guess is not the case here. I would suggest staying away from these types at the beginning.)

OK, as advice: first simply try to use global memory. Local memory's lifetime is only the duration of the kernel's execution; it is justified only if you re-use the same data element more than once (think of it as a cache that you explicitly preload).

Also, local memory is limited to about 16-48 KB, so it can store only a portion of your data. Try to decompose your graph into subgraphs that fit into these blocks.

In your representation you could partition the edges (from[], to[]) into fixed-size groups.

The generic pattern is:

Step 1. Copy from global to local:

your_local_array[ get_local_id(0) ] = input_global_mem[ get_global_id(0) ];

Step 2. Make sure every work-item has executed the copy: barrier(CLK_LOCAL_MEM_FENCE).

Now the work-items (threads) can work on the subgraph loaded in the local memory.
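Putting the pattern together, here is a minimal sketch of processing the edge arrays in fixed-size groups (CHUNK, the kernel name, and the assumption that the work-group size equals CHUNK are my own illustration, not the only way to tile):

#define CHUNK 256  // edges cached per work-group at a time (illustrative size)

__kernel void walkChunks(__global const int *from,
                         __global const int *to,
                         const int numEdges,
                         __global int *result)
{
    __local int fromTile[CHUNK];
    __local int toTile[CHUNK];
    int lid = get_local_id(0);

    // Process the edge list one fixed-size group at a time.
    for (int base = 0; base < numEdges; base += CHUNK) {
        // Step 1: cooperative copy from global to local (one edge per work-item).
        if (base + lid < numEdges) {
            fromTile[lid] = from[base + lid];
            toTile[lid]   = to[base + lid];
        }
        // Step 2: wait until the whole work-group has the chunk.
        barrier(CLK_LOCAL_MEM_FENCE);

        // ... work on the CHUNK edges now sitting in local memory,
        //     writing any output to result ...

        // Wait again before the next iteration overwrites the tiles.
        barrier(CLK_LOCAL_MEM_FENCE);
    }
}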

Remember, local memory will contain only a limited portion of the entire graph. If you need to access arbitrary nodes from any thread, the above pattern will not be usable.

To begin with, I suggest experimenting with the algorithm without local memory (read directly from global) and making sure it works correctly (usually there are some surprises along the way). Later you can identify which portions of the data to store in local memory to speed things up.

isti_spl