10

The target language is C/C++ and the program only has to work on Linux, but platform-independent solutions are obviously preferred. I run Xorg; XVideo and OpenGL are available.

How many FPS can I expect at 1024x768 on an Intel Core 2 Duo with Intel graphics? (Only the drawing counts; consider the arrays to be ready in RAM. No precise estimate needed.)

Johannes Weiss
  • What is a "2D array of color triplets"? A nice modern computer with some hardware acceleration should be able to put quite a few triangles on screen at a rate of more than 30fps without storing anything in VRAM. Putting VRAM to use is easy, though, and will boost that rate even higher. – Jay Kominek Feb 02 '09 at 17:07
  • An RGB triplet. For every pixel I've got three values (one red, one green, and one blue). – Johannes Weiss Feb 02 '09 at 20:37
  • SDL version http://stackoverflow.com/questions/28279242/how-to-render-a-pixel-array-most-efficiently-to-a-window-in-c – Ciro Santilli OurBigBook.com Apr 07 '16 at 22:08

5 Answers

10

The fastest way to draw a 2D array of color triplets:

  1. Use float (not byte, not double) storage. Each triplet consists of three floats from 0.0 to 1.0. This is the format GPUs implement most optimally (but use greyscale GL_LUMINANCE storage when you don't need hue - much faster!)
  2. Upload the array to a texture with glTexImage2D
  3. Make sure that the GL_TEXTURE_MIN_FILTER texture parameter is set to GL_NEAREST
  4. Map the texture to an appropriate quad.

This method is slightly faster than glDrawPixels (which for some reason tends to be badly implemented) and a lot faster than using the platform's native blitting.

Also, it gives you the option to repeatedly do step 4 without step 2 when your pixmap hasn't changed, which of course is much faster.
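A minimal sketch of that pattern, using legacy fixed-function OpenGL (WIDTH, HEIGHT, pixels, and pixmap_changed are placeholder names of mine, not from the answer above):

/* one-time setup: create the texture and upload the float triplets (step 2) */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); /* step 3 */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, WIDTH, HEIGHT, 0,
             GL_RGB, GL_FLOAT, pixels);

/* every frame: re-upload only if the pixmap changed, then draw (step 4) */
if (pixmap_changed)
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, WIDTH, HEIGHT,
                    GL_RGB, GL_FLOAT, pixels);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();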

Libraries that provide only slow native blitting include:

  • Windows' GDI
  • SDL on X11 (on Windows it provides a fast OpenGL backend when using SDL_HWSURFACE)
  • Qt

As to the FPS you can expect when drawing a 1024x768 texture on an Intel Core 2 Duo with Intel graphics: about 60 FPS if the texture changes every frame, and over 100 FPS if it doesn't.

But just do it yourself and see ;)

  • SDL texture sprites on X11 also seem to use hardware acceleration today. See also: http://stackoverflow.com/questions/21392755/difference-between-surface-and-texture-sdl-general and try the `test/testspriteminimal.c` example on version 2.0 in Ubuntu 15.10: nvidia-settings says that GPU usage goes up to 100%, and the FPS looks high. – Ciro Santilli OurBigBook.com Apr 08 '16 at 10:43
8

I did this a while back using C and OpenGL and got very good performance by creating a full-screen-sized quad and then using texture mapping to transfer the bitmap onto the face of the quad.

Here's some example code; I hope you can make use of it.

#include <GL/glut.h>

#define WIDTH 1024
#define HEIGHT 768

unsigned char texture[HEIGHT][WIDTH][3];  // row-major: [row][column][RGB]

void renderScene() {

    // upload the pixel array to a texture, then draw it on a
    // full-screen quad

    glEnable(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // (re)upload the whole array every frame; glTexSubImage2D on an
    // existing texture is faster when only the contents change
    glTexImage2D(
        GL_TEXTURE_2D,
        0,
        GL_RGB,
        WIDTH,
        HEIGHT,
        0,
        GL_RGB,
        GL_UNSIGNED_BYTE,
        &texture[0][0][0]
    );

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0, -1.0);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0, -1.0);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0,  1.0);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0,  1.0);
    glEnd();

    glFlush();
    glutSwapBuffers();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);

    glutInitWindowPosition(100, 100);
    glutInitWindowSize(WIDTH, HEIGHT);
    glutCreateWindow(" ");

    glutDisplayFunc(renderScene);

    glutMainLoop();

    return 0;
}
jandersson
  • I would recommend this technique. Two minor points: 1) depending on the GPU, non-power-of-two texture sizes might not be supported. 2) for subsequent frames, it's better to use glTexSubImage2D() on an existing texture, for performance reasons. – codelogic Feb 02 '09 at 19:53
  • OpenGL does not work this way any more. You may be able to get away with emulating the fixed-function pipeline in some cases, but it is not a good idea. The new GL stuff is a great improvement on the older stuff, but sadly the setup is kind of intense. – Jessy Diamond Exum Mar 13 '15 at 07:21
1

If you're trying to dump pixels to the screen, you'll probably want to make use of SDL's 'surface' facility. For the greatest performance, try to arrange for the input data to be in a layout similar to the output surface. If possible, steer clear of setting pixels in the surface one at a time.

SDL is not a hardware interface in its own right, but rather a portability layer that works well on top of many other display layers, including DirectX, OpenGL, DirectFB, and Xlib. You get very good portability, and it's a very thin layer on top of those technologies, so you pay very little performance overhead.
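A rough sketch of the row-by-row copy, using the SDL 1.2 API of the era (the 24 bpp RGB layout is an assumption; check screen->format, since SDL may hand back a different pixel format):

#include <string.h>
#include <SDL.h>

/* src points to HEIGHT rows of WIDTH tightly packed byte triplets;
   screen came from SDL_SetVideoMode(WIDTH, HEIGHT, 24, SDL_SWSURFACE) */
void blit_triplets(SDL_Surface *screen, const unsigned char *src,
                   int width, int height)
{
    int y;
    if (SDL_MUSTLOCK(screen))
        SDL_LockSurface(screen);
    /* copy row by row: the surface pitch may be wider than width * 3 */
    for (y = 0; y < height; y++)
        memcpy((Uint8 *)screen->pixels + y * screen->pitch,
               src + y * width * 3, width * 3);
    if (SDL_MUSTLOCK(screen))
        SDL_UnlockSurface(screen);
    SDL_Flip(screen);
}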

TokenMacGuy
1

Other options apart from SDL (already mentioned):

  • Cairo surfaces with glitz (in C; works on all platforms, but best on Linux)
  • Qt Canvas (in C++, multiplatform)
  • The raw OpenGL API or Qt's OpenGL module (you need to know OpenGL)
  • Pure Xlib/XCB if you want to take non-OpenGL platforms into account

My suggestion:

  1. Qt if you prefer C++
  2. Cairo if you prefer C (a minimal sketch follows)
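If you go the Cairo route, a minimal sketch of painting the triplets (the repacking loop and the assumption that cr already targets your window are mine; CAIRO_FORMAT_RGB24 stores each pixel in 32 bits, so the byte triplets must be repacked):

#include <stdint.h>
#include <stdlib.h>
#include <cairo.h>

/* repack byte triplets into Cairo's 32-bit RGB24 layout and paint them */
void paint_triplets(cairo_t *cr, const unsigned char *src,
                    int width, int height)
{
    int stride = cairo_format_stride_for_width(CAIRO_FORMAT_RGB24, width);
    unsigned char *buf = malloc((size_t)stride * height);
    int x, y;
    for (y = 0; y < height; y++) {
        uint32_t *row = (uint32_t *)(buf + y * stride);
        for (x = 0; x < width; x++) {
            const unsigned char *p = src + (y * width + x) * 3;
            row[x] = (uint32_t)p[0] << 16 | (uint32_t)p[1] << 8 | p[2];
        }
    }
    cairo_surface_t *img = cairo_image_surface_create_for_data(
        buf, CAIRO_FORMAT_RGB24, width, height, stride);
    cairo_set_source_surface(cr, img, 0, 0);
    cairo_paint(cr);
    cairo_surface_destroy(img);
    free(buf);
}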
kazanaki
0

the "how many fps can i expect" question can not be answered seriously. not even if you name the grandpa of the guy who did the processor layouting. it depends on tooooo many variables.

  • how many triplets do you need to render?
  • do they change between the frames?
  • at which rate (you wont notice the change if its more often than 30times a sec)?
  • do all of the pixels changes all of the time or just some of the pixels in some areas?
  • do you look at the pixels without any perspective distortion?
  • do you always see all the pixels?
  • depending on the version of the opengl driver you will get different results

This could go on forever; the answer depends entirely on your algorithm. If you stick to the OpenGL approach, you could also try different extensions (http://www.opengl.org/registry/specs/NV/pixel_data_range.txt comes to mind, for example) to see if one fits your needs better, although the already mentioned glTexSubImage2D() method is quite fast.
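For illustration, a sketch using the standard pixel buffer object extension (the cross-vendor descendant of NV_pixel_data_range); WIDTH, HEIGHT, and pixels are placeholders of mine, and the texture from the answers above is assumed to be bound already:

/* one-time setup: a pixel-unpack buffer big enough for one frame */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, WIDTH * HEIGHT * 3, NULL,
             GL_STREAM_DRAW);

/* per frame: write into the mapped buffer, then let the driver copy
   it into the texture asynchronously */
void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
memcpy(dst, pixels, WIDTH * HEIGHT * 3);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, WIDTH, HEIGHT,
                GL_RGB, GL_UNSIGNED_BYTE, (void *)0); /* offset into the PBO */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);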

akira