
I understand that OpenGL 3 was released in 2008, that it introduced a dramatically changed API, and that it is not supported if your computer/GPU is too old. For example, I know that the OpenGL 3 API does not work on my GPU (my computer was bought in 2009 and has an ATI Mobility Radeon HD 4530).

My questions are:

  • Is there some way to convert OpenGL 3 / OpenGL 4 code into OpenGL 2 code?
  • If not, does it mean that my computer won't be able to run new software that was written with OpenGL 3+ ?
  • Do most graphics developers still use the old API (i.e. OpenGL 2) so that older machines are also supported?
  • How hard is it to learn the new OpenGL API when you're already familiar with the old one? Is knowledge of the old API helpful?
SomethingSomething
    no. yes. yes (I believe fallback graphics techniques are common). no and yes. The "compatibility" GL profile gives both feature sets, and I've found knowing more about the old fixed pipeline gives me ideas about how to reproduce it with GL3. I wouldn't say [the shift](http://stackoverflow.com/a/26009113/1888983) is that radical, more that a few features were removed to simplify it. – jozxyqk Aug 08 '15 at 11:01
  • Thank you very much for your answer! – SomethingSomething Aug 08 '15 at 18:02

1 Answer


1) There is no way to directly convert OpenGL 3.x or 4.x code so that it runs on a system that only supports GL2, unless you are willing to write a lot of emulation code.
It is not just the paradigms that differ (that part is actually rather easy to bridge: there are, for example, immediate-mode libraries for OpenGL 3, and one could conceivably do the opposite and write a class for buffer objects that can be "mapped" and "unmapped" and that emits a lot of glVertex calls). The newer hardware also supports a whole lot of functionality as part of the render pipeline that simply isn't present in older hardware, such as instancing, geometry shaders, and tessellation. Some of these (say, instancing) are fairly straightforward to emulate; others (geometry shaders) are much harder. It is of course generally possible to somehow emulate everything, but that is such an awful lot of work that it just doesn't make sense.
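As a rough illustration of that second idea (faking a buffer object on GL2-only hardware), here is a minimal, hypothetical sketch; the class name and interface are made up for this answer and are not a real library:

```cpp
// Hypothetical sketch: a "buffer object" stand-in for GL2-only systems.
// Vertex data is kept in client memory and replayed via immediate mode,
// which is exactly the per-vertex call overhead modern GL avoids.
#include <GL/gl.h>
#include <cstddef>
#include <vector>

class EmulatedVertexBuffer {
    std::vector<float> data_;  // interleaved x, y, z positions
public:
    // Mimics glBufferData: copy the caller's vertex data into client memory.
    void bufferData(const float* vertices, std::size_t floatCount) {
        data_.assign(vertices, vertices + floatCount);
    }

    // Mimics glMapBuffer / glUnmapBuffer on top of plain client memory.
    float* map()   { return data_.data(); }
    void   unmap() { /* nothing to flush -- the data never leaves the client */ }

    // "Draw" by emitting one glVertex call per vertex.
    void drawTriangles() const {
        glBegin(GL_TRIANGLES);
        for (std::size_t i = 0; i + 2 < data_.size(); i += 3)
            glVertex3f(data_[i], data_[i + 1], data_[i + 2]);
        glEnd();
    }
};
```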

2) Your computer may be able to run software that uses OpenGL 3, provided that you have a software emulation layer that supports it. Mesa meanwhile supports OpenGL 3.1, so the answer can be "yes" (also, nVidia once had a kind of "tune-up tool" that let some GPUs expose features they didn't really support, via emulation, but I doubt it included anything like complete GL3 support, nor would it work for an ATI card anyway).
Otherwise, the answer is of course: no.
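For completeness, here is a minimal sketch (assuming a current GL context has already been created elsewhere, e.g. via GLFW or SDL, neither of which is part of the question) of how an application can check at runtime whether the driver, be it hardware or a software rasterizer such as Mesa's llvmpipe, reports at least OpenGL 3:

```cpp
#include <GL/gl.h>
#include <cstdio>

// Returns true if the current context reports OpenGL 3.0 or newer.
// GL_MAJOR_VERSION only exists from GL 3.0 on, so parsing the version
// string is the safe way to ask on an old context.
bool hasAtLeastGL3() {
    const char* version =
        reinterpret_cast<const char*>(glGetString(GL_VERSION));
    if (!version) return false;  // no current context
    int major = 0, minor = 0;
    std::sscanf(version, "%d.%d", &major, &minor);  // e.g. "3.1 Mesa ..."
    return major >= 3;
}
```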

3) It depends, and it is hard to tell what "most" do. But does it matter anyway? You need to do what is right for your application, not what "most" do.
For a candy-crushing tile-and-sprite game that people play on their phone on the way home from work, GL2 (or GL ES) is perfectly suitable. Developers doing something like that will use the old API because it works and because it is a lot more straightforward to get something running quickly.
Most others won't, since for more demanding applications performance is considerably worse, and geometry shaders, instancing, and other very useful functionality are simply not available. Also, virtually all reasonably current hardware supports at least GL3.

GL3 hardware has been available for low two-digit amounts of currency for at least five years, and GL4 hardware has been available at that price for some time, too. I have been preaching for half a decade (I might be wrong, but this is my point of view) that if someone cannot, or does not want to, afford a new card for $20, then they won't pay for your software either, so they are not "serious" customers that I want to consider. Besides, a card that cannot do GL3 is, due to lack of compute power, unable to run most things at an acceptable frame rate anyway. Which means that even if these people are serious, paying customers, they will be unhappy customers who raise a lot of complaints.
Your approach may vary, of course, but my belief is that it's best to simply stay away from these customers. It is unwise to plan for unhappy users from the start.

OpenGL 3 has a compatibility profile, but if you ask me, this was a nonsensical decision on the committee's side. While nVidia still recommends just using the compatibility profile, with the promise of supporting it indefinitely and the claim that it will run no slower and quite possibly faster, I don't agree with that point of view.
While enabling the compatibility profile as such may be "harmless" and may indeed not cause any performance issues, it allows you to deliberately or inadvertently use functionality that does not map well to how the hardware works, and it even allows you to do things that are conceptually "wrong". Using a core profile may feel painful and needlessly restrictive when you are used to the old functionality, but the restrictions really only help you write better code.
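To make the difference concrete, here is a short sketch of requesting a strict core-profile context. It assumes GLFW as the window/context library, which this answer does not otherwise mention, so treat it as one possible setup rather than the canonical one:

```cpp
#include <GLFW/glfw3.h>

// Assumes glfwInit() has already been called successfully.
GLFWwindow* createCoreProfileWindow() {
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    // In a core profile the deprecated functionality (glBegin/glEnd, the
    // matrix stack, ...) is simply absent, so accidental use fails loudly
    // instead of silently going through a badly-mapping legacy path.
    return glfwCreateWindow(800, 600, "core profile", nullptr, nullptr);
}
```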

4) Knowledge of the old API is actually rather a hindrance; in my opinion you should forget it as soon as you can. The only advantage of the legacy API is that it is dead simple to get your first triangle onto the screen (something that is already a bit of a challenge, taking roughly 40 lines of code, with GL3). But once you want to do something a little more involved, the initially high entry barrier of GL3/GL4 no longer matters at all. On the contrary: once you are used to it, it is so much easier to just throw a whole buffer of indirect draw commands and a whole buffer of vertex data at the driver and let it crunch through it.
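As an illustration of that last point, here is a sketch of the "one buffer of draw commands" style. It uses GL 4.3 multi-draw-indirect and assumes an extension loader such as GLEW or glad has already set up the function pointers:

```cpp
#include <GL/glew.h>

// Per-draw parameter layout mandated by the GL spec for indirect
// indexed draws (DrawElementsIndirectCommand).
struct DrawElementsIndirectCommand {
    GLuint count;          // number of indices for this draw
    GLuint instanceCount;  // number of instances for this draw
    GLuint firstIndex;
    GLuint baseVertex;
    GLuint baseInstance;
};

// Submits a whole batch of draws with a single call; all per-draw
// parameters already live in the bound indirect buffer on the GPU.
void submitAllDraws(GLuint indirectBuffer, GLsizei drawCount) {
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr,      // offset 0 into the buffer
                                drawCount,
                                0);           // commands tightly packed
}
```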

If you have not started learning anything yet, only learn the modern approach, and never look back. With some luck, Vulkan will have a finished spec this year (yep, I'm a dreamer, but one can hope!). It will, again, be somewhat radically different from how the GL API looks. But then again, if you are already used to doing "mostly AZDO" style GL, it won't be that much of a change.

Damon