I'm trying to port my OpenGL game/app engine to Vulkan.
Most Vulkan samples and tutorials are oriented toward demonstrating features, so their content is not dynamic: the vertices, transformations, and textures are pre-arranged specifically for each demo. However, to build a real game/app engine that renders fully dynamic content, I need to make sure the Vulkan pipeline can render content that cannot be pre-arranged. There are two major challenges:
- Dynamic geometry - each game scene may have 10 to 100+ objects, and each object may have several sub-meshes.
- Dynamic textures - each game scene may have 10 to 50 textures organized as a shared texture warehouse. Each object (or each of its sub-meshes) uses different textures from the texture warehouse.
I have found the best answer to the first question from other experienced Vulkan developers - use a descriptor set with dynamic offsets (a dynamic uniform buffer) and a different offset per object to pass in the per-object model matrix - and it works well.
Now I'm still looking for the best solution to the second question, i.e., dynamically switching textures when rendering each object/mesh. Remember that the object count and texture count are unpredictable, so there is no way to hard-code them in the fragment shader.
Vulkan developers have given suggestions in several Stack Overflow threads (like this one); basically there are three major solutions for texture switching:
- (1) Bind per-mesh DescriptorSets inside the render loop.
  - (a) It could be a dedicated texture descriptor set for every mesh.
  - (b) It could be a single texture descriptor set bound at per-mesh binding points.
- (2) Use an array texture, indexed by a push constant.
  The big limitation is that all layers of an array texture must be exactly the same size, which makes it not very useful for a game/app engine.
- (3) Use a descriptor array, indexed by a push constant,
  like this one in my shader: `layout(set = 2, binding = 0) uniform texture2D textures[TEXTURE_ARRAY_SIZE];`
  The big limitation is that the maximum array size varies per platform. The worst case is iOS, where only 31 textures can be used, a limit forced by the underlying Metal API. Other platforms also have quite limited counts: Android: 79, macOS: 128.
My own thoughts:
So far I tend toward solution (1), but still have some questions about it:
Solution (1)-(a) - I need to create a dedicated texture descriptor set for every single mesh. There is no big concern about the maximum number of sampled-image descriptors allowed in the pool; here is what I collected: NVIDIA: 1048576, AMD: 4294967295, Intel: 1200, Android/Snapdragon: 768, macOS: 256, iOS: 62. But some developers say that binding textures on a per-mesh basis may affect performance?
Solution (1)-(b) - The good part of this is that only a single sampled-image descriptor set is created, bound at different binding positions. But I think this is impossible, since all binding positions must be hard-coded in the fragment shader with `binding = x`, so I suspect the person who suggested this solution did not mean it for dynamic content rendering.
Finally, I tend toward solution (1)-(a), but would still like to hear from other Vulkan developers whether that solution has performance concerns, or whether there is a better solution.
PS: I recall how we switched textures in OpenGL: each texture was given a unique ID by `glGenTextures()`, and then with this ID we used `glBindTexture()` to choose which texture to use when rendering a mesh. Is there a way to simulate this mechanism in Vulkan without too much performance penalty?