
I've worked on a variety of demo projects with OpenGL and C++, but they've all involved simply rendering a single cube (or similarly simple mesh) with some interesting effects. For a simple scene like this, the vertex data for the cube could be stored in an inelegant global array. I'm now looking into rendering more complex scenes, with multiple objects of different types.

I think it makes sense to have different classes for different types of objects (Rock, Tree, Character, etc.), but I'm wondering how to cleanly break up the data and rendering functionality for objects in the scene. Each class will store its own array of vertex positions, texture coordinates, normals, etc. However, I'm not sure where to put the OpenGL calls. I'm thinking that I will have a loop (in a World or Scene class) that iterates over all the objects in the scene and renders them.

Should rendering them involve calling a render method in each object (Rock::render(), Tree::render(),...) or a single render method that takes in an object as a parameter (render(Rock), render(Tree),...)? The latter seems cleaner, since I won't have duplicate code in each class (although that could be mitigated by inheriting from a single RenderableObject class), and it allows the render() method to be easily replaced if I want to later port to DirectX. On the other hand, I'm not sure if I can keep them separate, since I might need OpenGL specific types stored in the objects anyway (vertex buffers, for example). In addition, it seems a bit cumbersome to have the render functionality separate from the object, since it will have to call lots of Get() methods to get the data from the objects. Finally, I'm not sure how this system would handle objects that have to be drawn in different ways (different shaders, different variables to pass in to the shaders, etc).
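
For concreteness, here is a minimal sketch of the two designs I'm weighing; all names and bodies are placeholders, not working GL code:

```cpp
// Option 1: each object knows how to draw itself.
class RenderableObject {
public:
    virtual ~RenderableObject() = default;
    virtual void render() const = 0;   // Rock, Tree, etc. override this
};

class Rock : public RenderableObject {
public:
    void render() const override {
        // bind this rock's buffers, set uniforms, issue the draw call
    }
private:
    // vertex positions, texture coordinates, normals, GL handles, ...
};

// Option 2: objects are passive data; a renderer issues all the GL calls.
class Tree { /* vertex data plus Get() accessors */ };

class Renderer {
public:
    void render(const Rock& rock) { /* pull data via accessors, draw */ }
    void render(const Tree& tree) { /* one overload per object type */ }
};
```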

Is one of these designs clearly better than the other? In what ways can I improve upon them to keep my code well-organised and efficient?

– Jeff

3 Answers


Firstly, don't even bother with platform independence right now. Wait until you have a much better idea of your architecture.

Doing a lot of draw calls and state changes is slow. In an engine, you generally want a renderable class that can draw itself. This renderable is associated with whatever buffers it needs (e.g. vertex buffers) and other information (vertex format, topology, index buffers, etc.). Shader input layouts can be associated with vertex formats.
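
As a rough illustration, a renderable along those lines might look something like this (assuming a GL 3+ loader such as glad; field names are illustrative):

```cpp
#include <glad/glad.h>

// A single drawable item: the GPU buffers and fixed state it needs,
// but no scene logic.
struct Renderable {
    GLuint  vao          = 0;             // vertex format / input layout
    GLuint  vertexBuffer = 0;
    GLuint  indexBuffer  = 0;             // bound into the VAO
    GLenum  topology     = GL_TRIANGLES;
    GLsizei indexCount   = 0;

    void draw() const {
        glBindVertexArray(vao);           // brings vertex/index buffers with it
        glDrawElements(topology, indexCount, GL_UNSIGNED_INT, nullptr);
    }
};
```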

You will want some primitive geometry classes, but defer anything complex to some kind of mesh class that handles indexed triangles. For a performant app, you will want to batch up calls (and potentially data) for similar input types in your shading pipeline, to minimise unnecessary state changes and pipeline flushes.
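
A mesh class in this sense is just indexed triangle data, separate from any GPU state; a minimal sketch:

```cpp
#include <cstdint>
#include <vector>

// Pure geometry: per-vertex attributes plus indexed triangles, no GL state.
struct Vertex {
    float position[3];
    float normal[3];
    float uv[2];
};

struct Mesh {
    std::vector<Vertex>   vertices;
    std::vector<uint32_t> indices;   // three indices per triangle
};
```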

Shader parameters and textures are generally controlled via some material class that is associated with the renderable.
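
A minimal sketch of such a material class, again with placeholder names (a real one would cache uniform locations rather than look them up on every bind):

```cpp
#include <glad/glad.h>
#include <map>
#include <string>

// The shader program plus the parameters/textures to bind before drawing.
struct Material {
    GLuint program = 0;                      // compiled and linked shader
    std::map<std::string, float> uniforms;   // scalar parameters
    std::map<GLuint, GLuint> textures;       // texture unit -> texture object

    void bind() const {
        glUseProgram(program);
        for (const auto& [name, value] : uniforms)
            glUniform1f(glGetUniformLocation(program, name.c_str()), value);
        for (const auto& [unit, tex] : textures) {
            glActiveTexture(GL_TEXTURE0 + unit);
            glBindTexture(GL_TEXTURE_2D, tex);
        }
    }
};
```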

Each renderable in the scene is usually a component of a node in a hierarchical scene graph, where each node inherits the transform of its ancestors through some mechanism. You will probably want a scene culler that uses a spatial partitioning scheme to do fast visibility determination and avoid draw-call overhead for things out of view.
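
A bare-bones sketch of such a node, assuming GLM for the matrix type:

```cpp
#include <glm/glm.hpp>
#include <memory>
#include <vector>

// World transform = parent's world transform composed with the local one.
struct Node {
    glm::mat4 localTransform{1.0f};
    glm::mat4 worldTransform{1.0f};
    std::vector<std::unique_ptr<Node>> children;
    // a Renderable, light, camera, etc. could be attached here

    void updateTransforms(const glm::mat4& parentWorld) {
        worldTransform = parentWorld * localTransform;
        for (auto& child : children)
            child->updateTransforms(worldTransform);
    }
};
```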

The scripting/behaviour part of most interactive 3D apps is tightly connected or hooked into its scene graph node framework and an event/messaging system.

This all fits together in a high-level loop where you update each subsystem based on elapsed time and then draw the scene for the current frame.
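
Schematically, that loop might look like the following; this is a pseudocode-level sketch where each identifier stands in for a subsystem owned by some top-level engine class:

```cpp
while (!window.shouldClose()) {
    const double dt = timer.tick();          // elapsed time this frame

    input.update();
    scripting.update(dt);                    // behaviours, events/messages
    physics.update(dt);
    scene.updateTransforms();                // propagate node transforms

    auto visible = culler.cull(scene, camera);   // spatial partitioning
    renderer.draw(visible, camera);              // batched submission

    window.swapBuffers();
}
```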

Obviously there are tonnes of little details left out, but it can become very complex depending on how generalised and performant you want to be, and on what kind of visual complexity you are aiming for.

Your question of draw(renderable), vs renderable.draw() is more or less irrelevant until you determine how all the parts fit together.

[Update] After working in this space a bit more, some added insight:

Having said that, in commercial engines it's usually more like draw(renderBatch), where each render batch is an aggregation of objects that are homogeneous in some way meaningful to the GPU: iterating over heterogeneous objects (in a "pure" OOP scene graph, via polymorphism) and calling obj.draw() one by one has horrible cache locality and is generally an inefficient use of GPU resources. It is very useful to take a data-oriented approach to designing how an engine talks to its underlying graphics API(s) in the most efficient way possible, batching things up as much as possible without negatively affecting code structure/readability.
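
A hedged sketch of that idea: flatten the scene into draw items, sort by a packed state key, and only change GPU state at batch boundaries (all types here are placeholders):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Renderable { void draw() const; };   // stand-in for the earlier sketch

struct DrawItem {
    uint64_t sortKey;                 // packed shader | material | mesh ids
    const Renderable* renderable;
};

void drawAll(std::vector<DrawItem>& items) {
    // Sort so items sharing shader/material/mesh state run back-to-back.
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  return a.sortKey < b.sortKey;
              });

    uint64_t currentKey = ~uint64_t{0};
    for (const DrawItem& item : items) {
        if (item.sortKey != currentKey) {
            // bind shader/material/vertex format only when the key changes
            currentKey = item.sortKey;
        }
        item.renderable->draw();
    }
}
```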

A practical suggestion is to write a first engine using a naive/"pure" approach to get really familiar with the domain space. Then on a second pass (or probably rewrite), focus on the hardware: things like memory representation, cache locality, pipeline state, bandwidth, batching, and parallelism. Once you really start considering these things, you will realise that most of your initial design goes out the window. Good fun.

– Preet Kukreti
    Thanks, that gives me a few ideas to work with. One thing I don't quite understand is the distinction between the renderable class and the mesh class you mentioned. Wouldn't I want the mesh class to be a renderable that can draw itself? At a higher level, I think the design is going to be more complicated than I anticipated. Do you know of any online resources that provide a good introduction to designing a rendering system? Most OpenGL tutorials I've found introduce the process of drawing, texturing, and lighting a few triangles, without much discussion of bigger picture architecture. – Jeff Mar 26 '12 at 20:00
  • @Jeff: No. A mesh is just a collection of vertex data. A mesh *alone* is not enough. To render something, you need a mesh, the shader you want to render with it, and the other assorted state (textures, etc). – Nicol Bolas Mar 27 '12 at 00:37
  • @Jeff what Nicol said is correct. Not everything that you draw would be a mesh (e.g. you may also want to draw lines or some other primitive like quads where a mesh may be overkill). A mesh class should describe geometry (and sometimes material index per tri, and grouping/hierarchy) but not much else. Look into books that talk about game engine development (even if you arent making a game). A good one is "Game Engine Architecture", also the "Game Programming Gems" and "Game Engine Gems" have a lot of good info. You may also want to lurk the gamedev.net forums – Preet Kukreti Mar 27 '12 at 01:26

I think OpenSceneGraph is one answer to this. Take a look at it and its implementation; it should provide you with some interesting insights into how to use OpenGL, C++, and OOP together.

– napcode
    Thanks for the suggestion; I'll check it out and see if I can get something out of it. I think what I really need, though, is something a bit smaller in scale, a stepping stone somewhere between the typical OpenGL tutorials of rendering a lit, textured, spinning cube (for example) and a full graphics library. – Jeff Mar 26 '12 at 20:07

Here is what I implemented for a physics simulation; it worked pretty well and sat at a good level of abstraction. First, I'd separate the functionality into classes such as:

  • Object - container that holds all the necessary object information
  • AssetManager - loads the models and textures, owns them (unique_ptr), returns a raw pointer to the resources to the object
  • Renderer - handles all OpenGL calls, allocates the buffers on the GPU, and returns render handles for the resources to the object (when I want the renderer to draw an object, I call the renderer with the model's render handle, texture handle, and model matrix); the renderer should aggregate this information so it can draw in batches (see the sketch after this list)
  • Physics - calculations that use the object along with its resources (especially the vertices)
  • Scene - connects all the above, can also hold some scene graph, depends on the nature of the application (can have multiple graphs, BVH for collisions, other representations for draw optimization etc.)
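
A rough sketch of that handle-based split; the opaque handles keep OpenGL types inside the Renderer, so the whole module can be swapped out later (all names are hypothetical):

```cpp
#include <cstdint>

struct Mesh;    // CPU-side geometry, owned by the AssetManager
struct Image;   // CPU-side texture data, owned by the AssetManager
struct Mat4 { float m[16]; };   // stand-in for your matrix type

using MeshHandle    = std::uint32_t;
using TextureHandle = std::uint32_t;

class Renderer {
public:
    MeshHandle    uploadMesh(const Mesh& mesh);      // allocates GPU buffers
    TextureHandle uploadTexture(const Image& image);
    void queueDraw(MeshHandle mesh, TextureHandle texture,
                   const Mat4& modelMatrix);
    void flush();   // aggregate queued draws into batches, then submit
};
```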

The catch is that the GPU is now a GPGPU (general-purpose GPU), so OpenGL or Vulkan is no longer just a rendering framework; physical calculations, for example, are performed on the GPU too. The renderer might therefore turn into something like a GPUManager, with other abstractions built on top of it. Also, the most efficient way to draw is in a single call: one big buffer for the whole scene, which can also be edited via compute shaders to avoid excessive CPU<->GPU communication.
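
For illustration, here is roughly what "one big buffer, one draw call" can look like in OpenGL 4.3+ using indirect multi-draw; the command buffer can be filled on the GPU by a compute shader (names are placeholders):

```cpp
#include <glad/glad.h>

// Matches the layout OpenGL expects for glMultiDrawElementsIndirect.
struct DrawElementsIndirectCommand {
    GLuint count;          // indices per draw
    GLuint instanceCount;
    GLuint firstIndex;     // offset into the shared index buffer
    GLuint baseVertex;     // offset into the shared vertex buffer
    GLuint baseInstance;
};

// With all scene geometry packed into one VAO/vertex/index buffer, and the
// commands resident in a GL_DRAW_INDIRECT_BUFFER (possibly written by a
// compute shader), the whole scene goes out in a single call:
void drawScene(GLuint sceneVao, GLuint indirectBuffer, GLsizei drawCount) {
    glBindVertexArray(sceneVao);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr, drawCount, 0);
}
```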

– Hitokage
    sorry, I'm late to the party. I like your answer but couldn't figure out `allocating buffer on GPU`. Did you mean `glGenBuffer(...)` ? If so why AssetManager doesn't handle that as well? – Onur A. Sep 19 '18 at 21:02
  • 1
    @OnurA. Hey, good point. Asset manager loads the models etc in RAM and Renderer takes care of everything about GPU. The point is to be able to replace the GPU module when for example switching from OGL to Vulkan. This way the only thing to do is to use a different Renderer class instead of editing and replacing all OGL functions in other modules such as AssetManager. – Hitokage Sep 21 '18 at 17:19
  • @OnurA. No problem, It's just my opinion though. – Hitokage Sep 23 '18 at 14:44