16

By (5, 5) I mean exactly the fifth row and fifth column.

I found it very hard to draw things using screen coordinates; all the coordinates in OpenGL are relative, usually ranging from -1.0 to 1.0. Why is it so important to prevent programmers from using screen coordinates / window coordinates?

xzhu
  • Because using [-1..1] instead of [0..800] will also work with a 1024p screen. But since *you* know the size of your screen, do as datenwolf and basszero said. – Calvin1602 May 27 '11 at 23:18
  • OpenGL is what happens when computer science standards are designed by mathematicians. If CS folks and UX folks were present there, they'd include simple functions to set resolution, text mode and the ability to [draw thin lines](http://artgrammer.blogspot.com/2011/05/drawing-nearly-perfect-2d-line-segments.html) with [subpixel precision](http://www.antigrain.com/doc/introduction/introduction.agdoc.html) out of the box. – anatoly techtonik Apr 10 '13 at 12:11
  • http://stackoverflow.com/questions/5467218/opengl-2d-hud-over-3d – Ciro Santilli OurBigBook.com Apr 12 '16 at 10:53
  • Thankkkkk youuuuu! It seems like the internet is silent on this. – Andrew Apr 01 '17 at 17:13

6 Answers

25

The simplest way is probably to set the projection to match the pixel dimensions of the rendering space via glOrtho. Then vertices can be in pixel coordinates. The downside is that resizing the window could cause problems and you're mostly wasting the accelerated transforms.

Assuming a window that is 640x480:

// You can reverse the 0,480 arguments depending on your Y-axis
// direction preference
glOrtho(0, 640, 0, 480, -1, 1);
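
With that projection active (and an identity modelview matrix), a minimal sketch of putting a single point at pixel (5, 5) might look like the following; the +0.5 offset is an assumption to land on the pixel's center rather than its edge:

// Minimal sketch, assuming legacy fixed-function OpenGL and that the
// glOrtho call above is already applied to the projection matrix.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glPointSize(1.0f);
glBegin(GL_POINTS);
    glColor3f(1.0f, 1.0f, 1.0f);   // white
    glVertex2f(5.5f, 5.5f);        // +0.5 puts the point at the center of pixel (5, 5)
glEnd();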

Frame buffer objects and textures are another avenue but you'll have to create your own rasterization routines (draw line, circle, bitmap, etc.). There are probably libs for this.
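
As a rough sketch of the texture route (assuming the same 640x480 window and legacy OpenGL; sizes and names are illustrative): keep a CPU-side pixel buffer, set pixels in it yourself, upload it with glTexImage2D, and draw it as one screen-sized quad.

// Rough sketch only: "rasterize" into a client-side buffer, then blit it via a texture.
static GLubyte pixels[480][640][3];                          // zero-initialized, 640x480 RGB
pixels[5][5][0] = pixels[5][5][1] = pixels[5][5][2] = 255;   // white pixel at (5, 5)

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 640, 480, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);

glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);                                           // uses the glOrtho above
    glTexCoord2f(0, 0); glVertex2f(  0,   0);
    glTexCoord2f(1, 0); glVertex2f(640,   0);
    glTexCoord2f(1, 1); glVertex2f(640, 480);
    glTexCoord2f(0, 1); glVertex2f(  0, 480);
glEnd();
glDisable(GL_TEXTURE_2D);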

Christian Rau
basszero
7

@dandan78 OpenGL is not a vector graphics renderer; it is a rasterizer. More precisely, it is a standard described by means of a C language interface. A rasterizer maps objects represented in 3D coordinate spaces (a car, a tree, a sphere, a dragon) into 2D coordinate spaces (say a plane, your app window, or your display), and these 2D coordinates belong to a discrete coordinate plane. The counterpart rendering method to rasterization is ray tracing.

Vector graphics is a way to represent, by means of mathematical functions, a set of curves, lines, or similar geometric primitives in a non-discrete way. So vector graphics belongs to the "model representation" field rather than the "rendering" field.

J.Guarin
3

You can just change the "camera" to make 3D coordinates match screen coordinates by setting the modelview matrix to identity and the projection to an orthographic projection (see my answer on this question). Then you can just draw a single point primitive at the required screen coordinates.

You can also set the raster position with glWindowPos (which works in screen coordinates, unlike glRasterPos) and then just use glDrawPixels to draw a 1x1 pixel image.
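
A sketch of that second route (glWindowPos is available from OpenGL 1.4 on; the color value here is just an example):

// Draw a single white pixel at window coordinates (5, 5),
// origin at the bottom-left by default.
const GLfloat white[3] = { 1.0f, 1.0f, 1.0f };
glWindowPos2i(5, 5);
glDrawPixels(1, 1, GL_RGB, GL_FLOAT, white);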

Christian Rau
1

glEnable( GL_SCISSOR_TEST );
glScissor( 5, 5, 1, 1 ); /// position of pixel
glClearColor( 1.0f, 1.0f, 1.0f, 0.0f ); /// color of pixel
glClear( GL_COLOR_BUFFER_BIT );
glDisable( GL_SCISSOR_TEST );

By changing the last two arguments of glScissor you can also draw a pixel-perfect rectangle.
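
For example, a sketch of a 10x20 pixel rectangle with its lower-left corner at (30, 40), using the same clear-based trick (the sizes here are just an illustration):

glEnable( GL_SCISSOR_TEST );
glScissor( 30, 40, 10, 20 );            /// 10x20 rectangle at (30, 40)
glClearColor( 1.0f, 0.0f, 0.0f, 0.0f ); /// red
glClear( GL_COLOR_BUFFER_BIT );
glDisable( GL_SCISSOR_TEST );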

LovelyHanibal
0

I did a bit of 3D programming several years back and, while I'm far from an expert, I think you are overlooking a very important difference between classical bitmapped DrawPixel(x, y) graphics and the type of graphics done with Direct3D and OpenGL.

Back in the days before 3D, computer graphics was mostly about bitmaps, which is to say collections of colored dots. These dots had a 1:1 relationship with the pixels on your monitor.

However, that had numerous drawbacks, including making 3D very difficult and requiring bitmaps of different sizes for different display resolutions.

In OpenGL/D3D, you are dealing with vector graphics. Lines are defined by points in a 3-dimensional coordinate space, shapes are defined by lines and so on. Surfaces can have textures, lights can be added, as can various types of lighting effects etc. This entire scene, or a part of it, can then be viewed through a virtual camera.

What you 'see' through this virtual camera is a projection of the scene onto a 2D surface. We're still dealing with vector graphics at this point. However, since computer displays consist of discrete pixels, this vector image has to be rasterized, which transforms the vector into a bitmap with actual pixels.

To summarize, you can't use screen/window coordinates because OpenGL is based on vector graphics.

dandan78
  • Then who rasterizes the vector data processed by OpenGL? If it is OpenGL itself, I think it should at least leave a few APIs that let users slightly modify the final result, or better, leave a window letting users read/write the pixel buffers. – xzhu May 27 '11 at 11:37
  • You can allocate a frame buffer object, render your scene into it, then do all the pixel painting stuff you want with it. – n0rd May 27 '11 at 11:41
  • I believe it's the graphics hardware that does the final rasterization. As for the rest of your comment, I don't know enough to be able to answer that, but the other answers appear to have solutions that will do the trick. :) – dandan78 May 27 '11 at 12:08
  • @trVoldemort: OpenGL is just an API: Application Programming **Interface**; the specification leaves the actual rasterizing to an abstract machine and even goes so far that the resulting pictures of identical drawing commands and data input into two different implementations need not be identical; pixel-perfect reproducibility is only warranted among identical implementations of OpenGL. – datenwolf May 27 '11 at 13:26
  • That being said, it's really easy to map OpenGL coordinates to viewport coordinates: `glViewport(win_offset_x, win_offset_y, win_width, win_height); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(win_offset_x, win_offset_x+win_width, win_offset_y, win_offset_y+win_height, -1, 1); glMatrixMode(GL_MODELVIEW); glLoadIdentity();` et voilà, OpenGL coordinates now map to viewport pixel coordinates. – datenwolf May 27 '11 at 13:28
  • Wow, THIS was the accepted answer? It didn't even answer the question, it just says, "What you're trying to do is impossible," which is a flat out lie. – Andrew Apr 01 '17 at 17:15
  • This answer and the fact that anyone upvoted it really pisses me off. – Andrew Apr 01 '17 at 17:17
0

I know I'm very late to the party, but just in case someone has this question in the future: I converted screen coordinates to OpenGL's normalized coordinates (the [-1, 1] range) using these:

// Maps a window-space x in [0, window_width] to [-1, 1]
double converterX (double x, int window_width) {
    return 2 * (x / window_width) - 1;
}

// Maps a window-space y in [0, window_height] to [1, -1]
// (flipped, because window y grows downward while OpenGL's y grows upward)
double converterY (double y, int window_height) {
    return -2 * (y / window_height) + 1;
}

These are basically rescaling functions.
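
For instance, assuming a 640x480 window, pixel (5, 5) would map and draw like this (illustrative sketch, legacy immediate-mode OpenGL):

// Convert a screen pixel to normalized coordinates and draw a point there.
double ndc_x = converterX(5.0, 640);   //  2 * (5 / 640) - 1 = -0.984375
double ndc_y = converterY(5.0, 480);   // -2 * (5 / 480) + 1 =  0.979166...

glBegin(GL_POINTS);
    glVertex2d(ndc_x, ndc_y);
glEnd();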

Sdacm0