
I'm planning to rewrite a small C++ OpenGL font library I made a while back using FreeType 2, since I recently discovered the changes in newer OpenGL versions. My code uses immediate mode and some function calls I'm pretty sure are deprecated by now, e.g. glLineStipple.

I would very much like to support a range of OpenGL versions, such that the code uses e.g. VBOs when possible and falls back on immediate mode if nothing else is available, and so forth. I'm not sure how to go about it though. AFAIK, you can't do a compile-time check, since you need a valid OpenGL context, which is only created at runtime. So far, I've come up with the following proposals (with inspiration from other threads/sites):

  • Use GLEW to make runtime checks in the drawing functions and to check for function support (e.g. glLineStipple)

  • Use some #define's and other preprocessor directives that can be specified at compile time to compile different versions that work with different OpenGL versions

  • Compile different versions supporting different OpenGL versions and supply each as a separate download

  • Ship the library with a script (Python/Perl) that checks the OpenGL version on the system (if possible/reliable) and makes the appropriate modifications to the source so it fits the user's version of OpenGL

  • Target only newer OpenGL versions and drop support for anything below

I'm probably going to use GLEW anyhow to easily load extensions.
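
For reference, here's roughly what such a check would look like with GLEW (an untested sketch; ofl_check_vbo_support is just a placeholder name, and it assumes a valid context has already been created and made current):

#include <GL/glew.h>

// Minimal sketch of a GLEW-based capability probe; must run after a
// valid OpenGL context has been created and made current.
bool ofl_check_vbo_support()
{
    if (glewInit() != GLEW_OK)
        return false;
    // VBOs entered core in OpenGL 1.5; older drivers may still expose
    // them through the ARB extension (with ARB-suffixed entry points).
    return GLEW_VERSION_1_5 || GLEW_ARB_vertex_buffer_object;
}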

FOLLOW-UP: Based on your very helpful answers, I tried to whip up a few lines based on my old code; here's a snippet (not tested/finished). I declare the appropriate function pointers in the config header, then when the library is initialized, I try to get the right function pointers. If VBOs fail (pointers are null), I fall back to display lists (deprecated in 3.0) and then finally to vertex arrays. Should I (maybe?) also check for available ARB extensions if e.g. VBOs fail to load, or is that too much work? Would this be a solid approach? Comments are appreciated :)

#if defined(WIN32) || defined(_WIN32) || defined(__WIN32__)
    #define OFL_WINDOWS
    // other stuff...
    #ifndef OFL_USES_GLEW
        // Declare vertex buffer object extension function pointers ourselves
        PFNGLGENBUFFERSPROC          glGenBuffers          = NULL;
        PFNGLBINDBUFFERPROC          glBindBuffer          = NULL;
        PFNGLBUFFERDATAPROC          glBufferData          = NULL;
        PFNGLVERTEXATTRIBPOINTERPROC glVertexAttribPointer = NULL;
        PFNGLDELETEBUFFERSPROC       glDeleteBuffers       = NULL;
        PFNGLMULTIDRAWELEMENTSPROC   glMultiDrawElements   = NULL;
        PFNGLBUFFERSUBDATAPROC       glBufferSubData       = NULL;
        PFNGLMAPBUFFERPROC           glMapBuffer           = NULL;
        PFNGLUNMAPBUFFERPROC         glUnmapBuffer         = NULL;
    #else
        // GLEW already declares and loads these entry points for us
    #endif
#elif some_other_system

Init function:

#ifdef OFL_WINDOWS
    bool loaded = true;

    // Attempt to load vertex buffer object extensions
    loaded = ((glGenBuffers          = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers"))                   != NULL && loaded);
    loaded = ((glBindBuffer          = (PFNGLBINDBUFFERPROC)wglGetProcAddress("glBindBuffer"))                   != NULL && loaded);
    loaded = ((glBufferData          = (PFNGLBUFFERDATAPROC)wglGetProcAddress("glBufferData"))                   != NULL && loaded);
    loaded = ((glVertexAttribPointer = (PFNGLVERTEXATTRIBPOINTERPROC)wglGetProcAddress("glVertexAttribPointer")) != NULL && loaded);
    loaded = ((glDeleteBuffers       = (PFNGLDELETEBUFFERSPROC)wglGetProcAddress("glDeleteBuffers"))             != NULL && loaded);
    loaded = ((glMultiDrawElements   = (PFNGLMULTIDRAWELEMENTSPROC)wglGetProcAddress("glMultiDrawElements"))     != NULL && loaded);
    loaded = ((glBufferSubData       = (PFNGLBUFFERSUBDATAPROC)wglGetProcAddress("glBufferSubData"))             != NULL && loaded);
    loaded = ((glMapBuffer           = (PFNGLMAPBUFFERPROC)wglGetProcAddress("glMapBuffer"))                     != NULL && loaded);
    loaded = ((glUnmapBuffer         = (PFNGLUNMAPBUFFERPROC)wglGetProcAddress("glUnmapBuffer"))                 != NULL && loaded);

    if (!loaded)
        std::cout << "OFL: Current OpenGL context does not support vertex buffer objects" << std::endl;
    else {
        #define OFL_USES_VBOS
        std::cout << "OFL: Loaded vertex buffer object extensions successfully" << std::endl;
        return true;
    }

    if (glMajorVersion >= 3) {
        std::cout << "OFL: Using vertex arrays" << std::endl;
        #define OFL_USES_VERTEX_ARRAYS
    } else {
        // Display lists were deprecated in 3.0 (although still available through ARB extensions)
        std::cout << "OFL: Using display lists" << std::endl;
        #define OFL_USES_DISPLAY_LISTS
    }
#elif some_other_system
NordCoder
  • As a side note to your mentioning of `glLineStipple`, you can look at [this question](http://stackoverflow.com/q/6017176/743214) and its answer (self-advertisement, cough, cough) to see how to emulate its behaviour using modern forward-compatible features. – Christian Rau May 29 '12 at 15:50
  • Hehe, no problem. Self-promotion is fine by me :) I actually saw that thread while I was searching the internet for some guidelines before I posted my question. Unfortunately, I, like you, lack the necessary hardware for geometry shader support, although I'll keep the 1D texture solution in mind. Also, I guess I could just generate and cache the required vertices on the CPU, then pass them to the GPU through an internal shader? – NordCoder May 29 '12 at 20:11
  • The problem without a GS is just that you cannot make the texture coordinates (that index the stipple pattern) dependent on the line's screen size that easily, as the original `glLineStipple` works in screen space. But maybe this isn't necessary in your case. – Christian Rau May 29 '12 at 21:59
  • I don't know really, but stippled lines are not my main concern atm, and it probably won't be for a time. If it's too much work I'll just leave it out. This project is not something I plan to be competitive with other libraries, it's purely educational. – NordCoder Jun 02 '12 at 12:41

2 Answers


First of all, rewrite your font renderer to use vertex arrays; you're going to be safe with that one, because they're supported everywhere. It's only a small step from VAs to VBOs. You only need a small set of extension functions, so it may make sense to do the loading manually to not be dependent on GLEW; linking it statically would be huge overkill.
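
To illustrate, a client-side vertex array path for a single glyph quad could look roughly like this (a sketch only; the two-floats-per-corner layout is an assumption, not taken from the library):

#include <GL/gl.h>

// Client-side vertex arrays, core since OpenGL 1.1, no extensions needed.
// Hypothetical layout: 4 corners per glyph, 2 floats each for position
// and texture coordinates (filled in from the FreeType glyph metrics).
void draw_glyph_quad(const GLfloat pos[8], const GLfloat uv[8])
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(2, GL_FLOAT, 0, pos);
    glTexCoordPointer(2, GL_FLOAT, 0, uv);

    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}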

Then put the calls into wrapper functions that you refer to through function pointers, so that you can switch render paths that way. For example, add a function "stipple_it" or so, which internally either calls glLineStipple or builds and sets the appropriate fragment shader for it.

Similar for glVertexPointer vs. glVertexAttribPointer.
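
A minimal sketch of that switch (all names here are made up for illustration):

#include <GL/gl.h>

// Legacy path: fixed-function line stippling.
static void stipple_legacy(GLint factor, GLushort pattern)
{
    glEnable(GL_LINE_STIPPLE);
    glLineStipple(factor, pattern);
}

// Shader path: stub; would bind a fragment shader that discards
// fragments according to the pattern (see the comments on the
// question for one approach).
static void stipple_shader(GLint factor, GLushort pattern)
{
    (void)factor;
    (void)pattern;
}

// One pointer per switchable operation, picked once at init time.
static void (*stipple_it)(GLint, GLushort) = NULL;

void init_render_paths(bool have_shaders)
{
    stipple_it = have_shaders ? stipple_shader : stipple_legacy;
}

The rest of the renderer only ever calls stipple_it(1, 0x00FF) and never needs to know which path is active.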

datenwolf
  • Thanks. I know about VAs but I was unaware they had so much support. I think it'll be pretty easy/educational to manually load extensions across Windows, Mac OS (X) and Linux, which would be even better if I could drop the external dependency on GLEW. One question though: how would I know which functions to store in my function pointers? I.e., how would I actually check that glLineStipple is available? The extension loading would take care of the extensions, of course. – NordCoder May 29 '12 at 12:46
  • VAs got introduced with OpenGL 1.1 – this is the version Windows has shipped its trampoline DLL with since 1996. VAs were never something "new" or "special" or only partially supported. *So use them!* – datenwolf May 29 '12 at 13:37
  • Anything below 1.1 (even 1.0) is just plain unrealistic (if it even exists) anyway. You wouldn't think about using texturing without texture objects. – Christian Rau May 29 '12 at 15:52

If you do want to make every check by hand, then you won't get away without some #defines, because Android/iOS only support OpenGL ES, and there the runtime checks would be different.

The run-time checks are also almost unavoidable because (from personal experience) there are a lot of caveats with different drivers from different hardware vendors (for anything above OpenGL 1.0, of course).

"Target only newer OpenGL versions and drop support for anything below" would be a viable option, since most of the videocards by ATI/nVidia and even Intel support some version of OpenGL 2.0+ which is roughly equivalent to the GL ES 2.0.

GLEW is a good way to ease the GL extension fetching. Still, there are issues with the GL ES on embedded platforms.

Now the loading procedure:

  1. On win32/Linux, just check that the function pointer is not NULL and use the extension string from GL to know what is supported on this concrete hardware (see the sketch after this list)

  2. The "loading" for iOS/Android/MacOSX would be just storing the pointers or even "do-nothing". Android is a different beast, here you have static pointers, but the need to check extension. Even after these checks you might not be sure about some things that are reported as "working" (I'm talking about "noname" Android devices or simple gfx hardware). So you will add your own(!) checks based on the name of the videocard.

  3. The OSX/iOS OpenGL implementation "just works". If you're running on 10.5, you get GL 2.1; on 10.6, 2.1 plus some extensions which make it almost like 3.1/3.2; on 10.7, a 3.2 core profile. There's no GL 4.0 for Macs yet, but 4.0 is mostly an evolution of 3.2 anyway.
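
Here's the sketch for item 1; extension_supported is a hypothetical helper (note the pitfall: one extension name can be a prefix of another, so a plain substring search is not enough):

#include <cstring>
#include <GL/gl.h>

// True if 'name' appears as a whole token in the GL_EXTENSIONS string
// (which is a single space-separated string up to OpenGL 3.0).
static bool extension_supported(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    if (!exts)
        return false;
    const std::size_t len = std::strlen(name);
    for (const char *p = exts; (p = std::strstr(p, name)) != 0; p += len) {
        if ((p == exts || p[-1] == ' ') &&
            (p[len] == ' ' || p[len] == '\0'))
            return true;
    }
    return false;
}

A VBO check would then combine both conditions, e.g. glGenBuffers != NULL && extension_supported("GL_ARB_vertex_buffer_object").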

If you're interested in my personal opinion, then I'm mostly from the "reinvent everything" camp and over the years we've been using some autogenerated extension loaders.

Most importantly, you're on the right track: the rewrite to VBO/VA/shaders/no-FFP would give you a major performance boost.

Viktor Latypov
  • And by #define's do you mean user-specified #define's? Some good points here I hadn't considered. I guess the sane way would be to combine #define's and extension loading as necessary. Then have runtime checks for VBOs vs. VAs vs. shaders etc.? (which would be done through extension loading checks I presume?) – NordCoder May 29 '12 at 12:50
  • No, I mostly mean the "standard" defines for the platform. E.g., if you see the _WIN32 define, then you just have to use wglGetProcAddress() and so on ("__linux__" stuff for the X Window/GLX and "__APPLE__" for iOS/OSX, where, by the way, everything is nicely packed into the OpenGL "framework", so you may use the functions without any loading code, because the Mac's hardware is guaranteed to support them). See the sketch after these comments. – Viktor Latypov May 29 '12 at 12:55
  • Ok, I see. The defines for Windows, Mac and Linux are already in place actually, so I would only need to add the appropriate extension loading code for each platform. Neat for Apple; I guess that stems from their control of the OpenGL implementation. I guess I would still need to check that the function is actually available for calling (even though I don't need to manually load it)? – NordCoder May 29 '12 at 13:05
  • Yes, on win32/linux just check the function pointer for not being NULL and use the ExtensionString from GL to know what is supported at this concrete hardware. – Viktor Latypov May 29 '12 at 13:12
  • Thanks for the quick replies :) Glad to hear my considerations were somewhat on the right track. – NordCoder May 29 '12 at 13:17
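
To make the platform-#define idea from these comments concrete, here is a minimal sketch of a cross-platform loader. OFL_WINDOWS is carried over from the question's snippet; OFL_LINUX and OFL_APPLE are assumed analogues, and ofl_get_proc is a made-up name:

#if defined(OFL_WINDOWS)
    #include <windows.h>
#elif defined(OFL_LINUX)
    #include <GL/glx.h>
#elif defined(OFL_APPLE)
    #include <dlfcn.h>
#endif

static void *ofl_get_proc(const char *name)
{
#if defined(OFL_WINDOWS)
    // wglGetProcAddress only resolves extension entry points; core 1.1
    // functions have to be fetched from opengl32.dll itself.
    void *p = (void *)wglGetProcAddress(name);
    if (!p)
        p = (void *)GetProcAddress(GetModuleHandleA("opengl32.dll"), name);
    return p;
#elif defined(OFL_LINUX)
    return (void *)glXGetProcAddressARB((const GLubyte *)name);
#elif defined(OFL_APPLE)
    // The OpenGL framework exports everything directly; dlsym also works.
    return dlsym(RTLD_DEFAULT, name);
#endif
}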