
I'm trying to render a scene at a fixed 320x240 to emulate the feel of classic 240p systems, and then upscale it to the full size of the screen.

The issue I have is that, although I can sort of fudge this effect using fixed-resolution sprites and glOrtho, everything still renders at the full resolution: it ends up somewhat slower, and rotating a sprite makes it obvious that the actual resolution is much higher.

Is there some way to render the viewport at 320x240, and then upscale it to fit the screen, using OpenGL 1.1?

Right now I'm using this to set up the 2D viewport for rendering, where width and height are the width and height of the canvas.

GL11.glViewport(0, 0, width, height);
GL11.glDepthFunc(GL11.GL_LEQUAL);
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT); // GL_ACCUM is not a valid clear mask
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0.0D, 320.0D, 240.0D, 0.0D, 0.0D, 100.0D); // 320x240 is the "true size"
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();

I've heard about rendering onto textures, but I cannot find any information on how to do this in OpenGL 1.1.

Hello234
1 Answer


I've heard about rendering onto textures, but I cannot find any information on how to do this in OpenGL 1.1.

That's because OpenGL 1.1 is from 1997 - literally a quarter century ago. There's no render-to-texture there.

The closest thing you get is glCopyTexImage2D, which allows you to copy framebuffer contents into a texture object (which technically allows this to happen entirely on the GPU side, without a round-trip to the CPU and system memory). So you basically render your scene at the internal resolution, using only a portion of the framebuffer as the viewport; copy that into a texture; and then draw a rectangle covering the full framebuffer with that texture and the desired filters.
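A minimal sketch of that per-frame flow, assuming an existing GL context and LWJGL like the question's code (the texture id name and sizes are mine; GL 1.1 requires power-of-two texture dimensions, so a 512x256 texture holds the 320x240 image, and glCopyTexSubImage2D, which is also core in 1.1, updates it each frame):

```java
// One-time setup: allocate a power-of-two texture big enough for 320x240.
int lowResTex = GL11.glGenTextures(); // LWJGL 3 convenience overload; LWJGL 2 uses an IntBuffer variant
GL11.glBindTexture(GL11.GL_TEXTURE_2D, lowResTex);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, 512, 256, 0,
                  GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);

// Every frame: render the scene into a 320x240 corner of the framebuffer...
GL11.glViewport(0, 0, 320, 240);
// ... draw the scene here ...

// ...copy that corner into the texture (stays on the GPU)...
GL11.glBindTexture(GL11.GL_TEXTURE_2D, lowResTex);
GL11.glCopyTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, 0, 0, 320, 240);

// ...and draw a fullscreen textured quad over the whole window.
GL11.glViewport(0, 0, width, height);
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();            // identity projection: clip space is -1..1
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
GL11.glEnable(GL11.GL_TEXTURE_2D);
float s = 320.0f / 512.0f;        // only part of the texture holds the image
float t = 240.0f / 256.0f;
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0); GL11.glVertex2f(-1, -1);
GL11.glTexCoord2f(s, 0); GL11.glVertex2f( 1, -1);
GL11.glTexCoord2f(s, t); GL11.glVertex2f( 1,  1);
GL11.glTexCoord2f(0, t); GL11.glVertex2f(-1,  1);
GL11.glEnd();
```

GL_NEAREST gives the chunky-pixel look; GL_LINEAR would give a smoother upscale.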

But be warned that GL_MAX_TEXTURE_SIZE is only guaranteed to be at least 64 in GL 1.1, so to write this in a way that it would work on any conformant GL 1.1 implementation, you would have to have code to split your framebuffer texture into 64x64 pixel tiles.
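The tile layout itself is just arithmetic. A hypothetical helper (the class and method names are mine, not part of any API) that splits the 320x240 framebuffer into tiles no larger than the reported limit could look like:

```java
import java.util.ArrayList;
import java.util.List;

public class TileSplitter {
    // x, y, w, h of one tile, in framebuffer pixels
    public record Tile(int x, int y, int w, int h) {}

    // Splits a width x height region into tiles of at most maxTex x maxTex,
    // where maxTex would come from glGetInteger(GL_MAX_TEXTURE_SIZE).
    public static List<Tile> split(int width, int height, int maxTex) {
        List<Tile> tiles = new ArrayList<>();
        for (int y = 0; y < height; y += maxTex) {
            for (int x = 0; x < width; x += maxTex) {
                tiles.add(new Tile(x, y,
                        Math.min(maxTex, width - x),
                        Math.min(maxTex, height - y)));
            }
        }
        return tiles;
    }
}
```

For 320x240 with a 64-pixel limit this yields a 5x4 grid of 20 tiles, with the bottom row only 48 pixels tall; each tile would then get its own glCopyTexSubImage2D and its own quad.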

The other alternative - a round-trip through system memory - would involve reading the data back with glReadPixels, and drawing it again via glDrawPixels while applying the glPixelZoom setting.
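A sketch of that readback path, again assuming an existing GL context and LWJGL (the buffer would be allocated once, not per frame):

```java
// One-time: a buffer big enough for 320x240 RGB pixels.
java.nio.ByteBuffer pixels = org.lwjgl.BufferUtils.createByteBuffer(320 * 240 * 3);

// After rendering the scene into the 320x240 viewport, read it back...
GL11.glReadPixels(0, 0, 320, 240, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, pixels);

// ...then redraw it scaled up; e.g. a 1280x960 window is a 4x zoom each way.
GL11.glViewport(0, 0, width, height);
GL11.glPixelZoom(width / 320.0f, height / 240.0f);
GL11.glRasterPos2f(-1.0f, -1.0f); // bottom-left corner in clip space
GL11.glDrawPixels(320, 240, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, pixels);
```

Note that glDrawPixels with a zoom factor was notoriously slow on many drivers, so the copy-to-texture path is usually preferable when it's available.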

derhass
  • "literally a quarter century ago." Wow I suddenly feel ancient again. I'll definitely be trying this out! I'm using this as I'm playing with the idea of running this on Pentium II systems for the heck of it – Hello234 Jun 06 '22 at 01:56
  • 1
Hmm, which kind of GL 1.1 capable GPU would you have in a Pentium II era PC? Consumer GPUs of that time weren't really compatible with the full spec; the best you got were the "minigl" drivers, which were basically exactly the subset of features GLQuake used. But I can't tell you if either of the two approaches I sketched in this answer would fall into that subset. – derhass Jun 06 '22 at 02:18
  • Worked like a charm! And it's an ATI Fire GL or something similar... It supports most stuff this game threw at it so far, and ran surprisingly not horrible. – Hello234 Jun 06 '22 at 05:05
  • 1
    Well, FireGL had been developed originally by 3DLabs, targeting the CAD and DCC market. It never was really a "consumer" GPU, so yeah, the GL support of these cards is more complete. – derhass Jun 06 '22 at 11:44