Assume that I want to perform parallel computations on a large, fixed object, e.g. a large sparse (directed) graph, or a similar kind of structure.
For any reasonable computation on this graph or object, such as random walks in the graph, keeping it in global memory is presumably out of the question for speed reasons.
That leaves local or private memory. If I have understood the GPU architecture correctly, there is virtually no speed difference between (read-only) access to local memory and to private memory; is that correct? I'm reluctant to copy the graph into private memory, since that would mean every single work-item has to store the entire graph, which could eat up the GPU's memory very quickly (and for very large graphs even reduce the number of cores that can be used and/or make the OS unstable).
So, assuming I'm right about the read speed of local vs. private memory, how do I do this in practice? If, for simplicity, I reduce the graph to an int[] from and an int[] to (storing the start and end vertex of each directed edge), I can of course make the kernel look like this:
__kernel void computeMe(__local const int *to, __local const int *from, __global int *result) {
    // ...
}
but I don't see how I should call this from JOCL, since no private/local/global modifiers exist on the Java side.
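For an ordinary __global buffer I know what the host side looks like; roughly this (only a sketch, assuming the usual org.jocl imports, an already created context and built kernel, and made-up names such as fromHost, toHost and numEdges):

int numEdges = fromHost.length;

// Copy the two edge arrays into __global device memory and allocate the result buffer.
cl_mem toMem = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
        Sizeof.cl_int * numEdges, Pointer.to(toHost), null);
cl_mem fromMem = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
        Sizeof.cl_int * numEdges, Pointer.to(fromHost), null);
cl_mem resultMem = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
        Sizeof.cl_int * numEdges, null, null);

// If the kernel parameters were __global, setting the arguments would simply be:
clSetKernelArg(kernel, 0, Sizeof.cl_mem, Pointer.to(toMem));
clSetKernelArg(kernel, 1, Sizeof.cl_mem, Pointer.to(fromMem));
clSetKernelArg(kernel, 2, Sizeof.cl_mem, Pointer.to(resultMem));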
Will the local arguments be filled automatically in the local memory of each work-group? Or how does this work? It's not at all clear to me how I should be doing this memory assignment correctly.
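My best guess is that a __local argument is specified by its size alone, with a null pointer, so that each work-group gets its own block of local memory (again only a sketch, same made-up names and assumptions as above):

// Guess: pass only the size in bytes and a null Pointer for the __local parameters.
clSetKernelArg(kernel, 0, Sizeof.cl_int * numEdges, null);
clSetKernelArg(kernel, 1, Sizeof.cl_int * numEdges, null);
clSetKernelArg(kernel, 2, Sizeof.cl_mem, Pointer.to(resultMem));

But that local memory would presumably be uninitialized, so the kernel itself would still have to copy the graph in from some __global buffer, which is exactly the step I don't understand.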