
I have a grayscale texture (8000×8000) in which the value of each pixel is an ID (actually, this ID is the ID of the triangle to which the fragment belongs; I want to use this method to determine how many triangles, and which ones, are visible in my scene).

Now I need to count how many unique IDs there are and what they are. I want to implement this with GLSL and minimize the data transfer between GPU memory and CPU memory.

The initial idea I came up with is to use a shader storage buffer, bound to an array in GLSL whose size is totalTriangleNum, then iterate over the ID texture in a shader, incrementing the array element whose index equals the ID read from the texture.

After that, read the buffer back into the OpenGL application and get what I want. Is this an efficient way to do it? Or is there a better solution, such as a compute shader (well, I'm not familiar with those) or something else?
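
To make the idea concrete, here is roughly what I have in mind for the shader side (untested; the binding points and names are placeholders of my own):

    #version 430
    // Rough sketch: draw a full-screen quad over an 8000x8000 viewport;
    // each fragment looks up its triangle ID and increments the matching
    // counter in an SSBO. No color output is written.
    layout(binding = 0) uniform usampler2D idTexture;   // the 8000x8000 ID texture

    layout(std430, binding = 1) buffer TriangleCounts {
        uint counts[];   // sized totalTriangleNum, cleared to zero before the draw
    };

    void main() {
        uint id = texelFetch(idTexture, ivec2(gl_FragCoord.xy), 0).r;
        atomicAdd(counts[id], 1u);
    }

Afterwards I would read the buffer back (e.g. with glGetBufferSubData or by mapping it) and collect the non-zero entries on the CPU.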

Wang Yudong
  • What you are describing sounds like a compute shader to me. I don't know what other kind of shader you would want to use for this. – Dietrich Epp May 16 '16 at 11:33
  • @DietrichEpp I simply want to use a fragment shader to do that; I have only just heard about compute shaders. – Wang Yudong May 16 '16 at 12:31
  • A fragment shader doesn't make sense because you're not doing any drawing. – Dietrich Epp May 16 '16 at 12:43
  • I only need the fragment shader to do the calculation and write into the buffer; then I read that buffer. No drawing is needed. – Wang Yudong May 16 '16 at 12:50
  • @DietrichEpp You can have fragment shaders without any outputs. So with a high enough GLSL version, you could write histogram values into an SSBO using a fragment shader without outputs. I agree that a compute shader is the better choice, but it seems possible with a fragment shader as well. – Reto Koradi May 17 '16 at 05:59

1 Answer


I want to use this method to determine how many triangles, and which ones, are visible in my scene

Given your description of your data, let me rephrase that a bit:

You want to determine how many distinct values there are in your dataset, and how often each value appears.

This is commonly known as a histogram. Unfortunately (for you), generating histograms is among the problems that are not that trivially solved on GPUs. Essentially you have to divide your image into smaller and smaller subimages (BSP, quadtree, etc.) until you are down to single pixels, on which you perform the evaluation. Then you backtrack, propagating the sub-histograms upward, essentially performing an insertion or merge sort on the histogram.
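
To make that a bit more concrete, here is a rough compute-shader sketch of the sub-histogram/merge idea (this is not the method from the paper linked below; binding points, names and the bin count are made up, and it assumes the number of distinct IDs is small enough for a shared-memory histogram): each workgroup builds the histogram of its tile in shared memory, then merges it into the global histogram.

    #version 430
    // Rough sketch: each 16x16 workgroup builds the histogram of its tile
    // in shared memory, then merges it into the global histogram with
    // atomics. Assumes all IDs fit into NUM_BINS bins.
    #define NUM_BINS 256u

    layout(local_size_x = 16, local_size_y = 16) in;

    layout(binding = 0, r32ui) uniform readonly uimage2D idTexture;

    layout(std430, binding = 1) buffer Histogram {
        uint bins[NUM_BINS];   // cleared to zero before the dispatch
    };

    shared uint localBins[NUM_BINS];

    void main() {
        uint li = gl_LocalInvocationIndex;   // 0..255 for a 16x16 workgroup
        localBins[li] = 0u;                  // each invocation clears one shared bin
        barrier();

        ivec2 p = ivec2(gl_GlobalInvocationID.xy);
        if (all(lessThan(p, imageSize(idTexture)))) {
            uint id = imageLoad(idTexture, p).r;
            if (id < NUM_BINS)               // assumes IDs fall into the bin range
                atomicAdd(localBins[id], 1u);
        }
        barrier();

        // Merge this tile's sub-histogram into the global one.
        atomicAdd(bins[li], localBins[li]);
    }

For an 8000×8000 image this would be dispatched as glDispatchCompute(500, 500, 1). With millions of distinct triangle IDs the shared-memory histogram no longer fits, which is exactly where the more elaborate partitioning schemes from the literature come in.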

Generating histograms on GPUs is still actively researched, so I suggest you read up on the published academic work (usually accompanied by source code). Keywords: histogram, GPU.

This is a nice paper by AMD's GPU researchers: https://developer.amd.com/wordpress/media/2012/10/GPUHistogramGeneration_preprint.pdf

datenwolf
  • Thanks for providing some direction on this problem. – Wang Yudong May 16 '16 at 12:32
  • 1
    I wouldn't say that calculating a histogram on a GPU is difficult. A very basic implementation is fairly trivial. The challenge only comes in when trying to make it as efficient as possible, where there's various optimization options, and which approach works best can be very hardware dependent. When looking for literature, a histogram is an example of what's commonly called a "reduction operation". All vendors should have whitepapers on how to implement those most efficiently. They might be more OpenCL than OpenGL, though. – Reto Koradi May 17 '16 at 06:04
  • 1
    @RetoKoradi: Note that I wrote "not trivially solved", which does means something different than "difficult". You can for example try to iterate over all pixels using a loop in a shader (tivial solution), but then you may find that this executes in only a single warp/wavefront (i.e. not very parallel) and may hit a timeout. So you have to change your strategy and thereby it becomes nontrivial. But that doesn't make it a difficult problem, you just can't do it the naive way anymore. – datenwolf May 17 '16 at 08:39