I want to write pixels directly to the screen (not using vertices and polygons). I have investigated a variety of answers to similar questions, the most notable ones here and here.

I see a couple ways drawing pixels to the screen might be possible, but they both seem to be indirect and use unnecessary floating point operations:

  1. Draw a GL_POINT for each pixel on the screen. I've tried this and it works, but it seems like an inefficient way to draw pixels onto the screen. Why write my data in floating point when it's going to be transformed back into an array of pixel data?

  2. Create a 2d quad that spans the entire screen and write a texture to it. Like the first option, this seems to be a roundabout way of putting pixels on the screen. The texture would still have to go through rasterization before getting put on the screen. Also textures must be square, and most screens are not square, so I'd have to handle that problem.

How do I get a matrix of colors, where pixels[0][0] corresponds to the upper-left corner and pixels[1920][1080] corresponds to the bottom-right, onto the screen in the most direct and efficient way possible using OpenGL?

Writing directly to the framebuffer seems like the most promising choice, but I have only seen people using the framebuffer for shading.

user2704267
    "*Also textures must be sqare*" Since when? textures have *never* had to be square. Even when they were restricted to powers of two in dimension, the width and height did not have to be the same. – Nicol Bolas Aug 21 '13 at 15:54
  • If you don't need the 3D rendering itself, you can just use SDL: http://stackoverflow.com/a/36504803/895245 – Ciro Santilli OurBigBook.com Apr 14 '16 at 08:01

1 Answer

First off: OpenGL is a drawing API designed around a rasterizer system that ingests homogeneous coordinates defining geometric primitives, which get transformed and, well, rasterized. Merely drawing pixels is not what the OpenGL API is concerned with. Also, most GPUs are floating-point processors by nature and can in fact process floating-point data more efficiently than integers.

Why write my data in floating point when it's going to be transformed back into an array of pixel data?

Because OpenGL is a rasterizer API, i.e. it takes primitive geometrical data and turns it into pixels. It doesn't deal with pixels as input data, except in the form of image objects (textures).

Also textures must be square, and most screens are not square, so I'd have to handle that problem.

Whoever told you that, or wherever you got it from: they are wrong. OpenGL 1.x did have the constraint that texture sizes had to be powers of two in each direction, but width and height could still differ. Since OpenGL 2, texture sizes are completely arbitrary.

However, a texture might not be the most efficient way to directly update single pixels on the screen either. What works well is to first draw your pixels into a CPU-side pixel buffer, which for display gets loaded into a texture that is then drawn onto a full-viewport quad.

However, if your goal is direct manipulation of on-screen pixels, without a rasterizer in between, then OpenGL is not the right API for the job. There are other, 2D graphics APIs that let you push pixels directly to the screen.

Pushing individual pixels one at a time is very inefficient, though. I strongly recommend operating on a pixel buffer, which is then blitted or drawn as a whole for display. And done with OpenGL, by drawing a full-viewport textured quad, this is as efficient as in any other graphics API.

datenwolf
    Thanks, it seems I had a pretty fundamental misunderstanding of OpenGL's purpose as a whole. A quick follow-up: If I have a 1920x1080 display, draw a quad with vertices (-1.0, -1.0, -1.0), (1.0, -1.0, -1.0), (1.0, 1.0, -1.0), (-1.0, 1.0, -1.0), and texture it with a 1920x1080 texture, would there be a 1:1 relationship between each texel and each pixel on the display (assuming no anti-aliasing, post-processing, etc.)? – user2704267 Aug 22 '13 at 00:04
  • If the quad drawn is within the window bounds determined by pixel offset using an orthogonal matrix at screen origin 2d(0,0) then sure. However, just barely. If you were to implement texture atlases with texture coordinates they aren't 1 to 1, but for lack of remembering better words, transposed much differently. As long as you stick to 1 to 1 you are just fine. – user2262111 Dec 11 '18 at 06:33
  • If the goal is to address viewport pixels, then the most straightforward method is not using normalized texture coordinates at all, but fetching texels by pixel coordinate, using the viewport pixel coordinate as input. I.e. in your fragment shader [`texelFetch`](https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/texelFetch.xhtml)`(texture, `[`gl_FragCoord`](https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/gl_FragCoord.xhtml)`, 0);`. – datenwolf Dec 11 '18 at 09:56