I used the code from "How to make an OpenGL rendering context with transparent background?" to create a window with a transparent background. My problem is that the frame rate is very low: I get around 20 frames/sec even when I draw a single quad (made from 2 triangles). I tried to find out why, and glFlush() alone takes around 0.047 seconds. Do you have any idea why? The same scene renders at 6000 fps (when I remove the 60 fps limitation) in a window that does not have a transparent background. It also pushes one core to 100%. I'm testing on a Q9450 @ 2.66 GHz with an ATI Radeon 4800, on Win7.
- Did you consider using double buffering and `SwapBuffers()` instead of `glFlush()`? – arul Jan 24 '11 at 10:51
- That technique is not good for rendering OpenGL animations, but it's the only way I'm aware of to draw a transparent OpenGL window on pre-Vista Windows. This page has an interesting example; you may try to reverse it to find out how it's done: http://coreytabaka.com/programming/cube-demo/ – karlphillip Feb 01 '11 at 17:47
1 Answer
I don't think you can get good performance this way. The example linked contains the following code:
void draw(HDC pdcDest)
{
    assert(pdcDIB);
    verify(BitBlt(pdcDest, 0, 0, w, h, pdcDIB, 0, 0, SRCCOPY));
}
BitBlt is a function executed on the CPU, whereas the OpenGL functions are executed by the GPU. So the rendered data has to crawl back from the GPU to main memory, and the effective bandwidth from the GPU to the CPU is quite limited (even more so because the data has to travel back again once it is BitBlt'ed).
If you really want a transparent window with rendered content, you might want to look at Direct2D and/or Direct3D; maybe there is some way to do that without the performance penalty of moving the data.
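Since the asker is on Win7, one possible alternative (not part of the linked example) is to keep an ordinary double-buffered, hardware-accelerated OpenGL context and ask the DWM compositor for per-pixel alpha with DwmEnableBlurBehindWindow, avoiding the BitBlt readback entirely. A minimal sketch, assuming the window and GL context (with an alpha-capable pixel format) are created elsewhere:

```c
/* Sketch: per-pixel-transparent OpenGL window via the DWM compositor
 * (Vista/Win7+). SwapBuffers() stays on the GPU; no CPU readback.
 * Error handling omitted; link with dwmapi.lib. */
#include <windows.h>
#include <dwmapi.h>

void enable_per_pixel_alpha(HWND hwnd)
{
    /* An empty region tells DWM to apply the effect to the whole
     * client area, so pixels rendered with alpha = 0 show through. */
    HRGN region = CreateRectRgn(0, 0, -1, -1);
    DWM_BLURBEHIND bb = {0};
    bb.dwFlags  = DWM_BB_ENABLE | DWM_BB_BLURREGION;
    bb.fEnable  = TRUE;
    bb.hRgnBlur = region;
    DwmEnableBlurBehindWindow(hwnd, &bb);
    DeleteObject(region);
}
```

For this to work the pixel format must include an alpha channel, and the scene should be cleared with glClearColor(0, 0, 0, 0) so uncovered pixels end up transparent.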

Raoul Supercopter
- The BitBlt function executes in 0 (zero) ms. I don't know if that's the problem. I need to use OpenGL because I have to run this project on other platforms. – Mircea Ispas Jan 24 '11 at 10:26
- BitBlt is not the problem per se; the problem is that it forces a data transfer (which is the bottleneck) between the GPU and main memory. – Raoul Supercopter Jan 24 '11 at 10:32