
I'm trying to port my OpenGL game/app engine to Vulkan.

Most Vulkan samples and tutorials are oriented toward demonstrating features, so their content is not dynamic: vertices, transformations, and textures are pre-arranged specifically for each demo. However, to build a real game/app engine that renders fully dynamic content, I need to make sure the Vulkan pipeline can render content that cannot be pre-arranged. There are two major challenges:

  • Dynamic geometry - each game scene may have 10 to 100+ objects, and each object may have several sub-meshes.
  • Dynamic texture - each game scene may have 10 to 50 textures organized as a shared texture warehouse. Each object or sub-mesh uses different textures from the texture warehouse.

I have found the best answer to the first question from other experienced Vulkan developers - using a dynamic descriptor set with a different binding offset per object to pass in the per-object model matrix - and it works well.
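
Here is a minimal sketch of the loop I ended up with, just to illustrate (the names `props`, `modelSet`, `pipelineLayout` and `objects` are placeholders from my engine, and the uniform buffer holds one mat4 per object):

```cpp
// One VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC descriptor, backed by a single
// large uniform buffer that stores one model matrix per object.
VkDeviceSize alignment = props.limits.minUniformBufferOffsetAlignment;
VkDeviceSize stride    = (sizeof(float) * 16 + alignment - 1) & ~(alignment - 1);

for (uint32_t i = 0; i < objectCount; ++i) {
    // The same descriptor set is bound for every draw; only the dynamic offset changes.
    uint32_t dynamicOffset = static_cast<uint32_t>(i * stride);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                            0 /*firstSet*/, 1, &modelSet, 1, &dynamicOffset);
    vkCmdDrawIndexed(cmd, objects[i].indexCount, 1, objects[i].firstIndex, 0, 0);
}
```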

Now I'm still looking for the best solution to the second question, i.e., dynamically switching textures when rendering each object/mesh. Remember that the object count and texture count are unpredictable, so there is no way to hard-code them in the fragment shader.

Some Vulkan developers have given suggestions in several Stack Overflow threads (like this one); basically there are three major solutions for texture switching:

  • (1) Bind per-mesh DescriptorSets inside the render loop.
    • (a) It could be a number of per-mesh texture descriptor sets, one per mesh.
    • (b) It could be a single texture descriptor but bound at per-mesh binding points.
  • (2) Use array textures + index by push-constant

    The big limitation is that all textures in different layers must be exactly the same size, which makes it not very useful for a game/app engine.

  • (3) Use Descriptor array + index by push-constant

    Like this one in my shader: `layout(set = 2, binding = 0) uniform texture2D textures[TEXTURE_ARRAY_SIZE];`. The big limitation is that the maximum array size varies per platform; the worst case is iOS, where only 31 textures can be used, forced by the underlying Metal API. Other platforms also have quite limited counts: Android: 79, macOS: 128. (A host-side sketch of this option follows after this list.)
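
For reference, here is roughly how I understand the host side of option (3) would look, assuming the fragment shader above plus a push-constant block such as `layout(push_constant) uniform Push { uint texIndex; } pc;` (host-side names like `textureArraySet` and `meshes` are placeholders):

```cpp
// Bind the descriptor set that holds the whole texture array once, then select
// the texture per mesh with a push constant that indexes into textures[].
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                        2 /*firstSet, matches set = 2*/, 1, &textureArraySet, 0, nullptr);

for (const Mesh& mesh : meshes) {
    uint32_t texIndex = mesh.textureIndex;   // index into the texture warehouse
    vkCmdPushConstants(cmd, pipelineLayout, VK_SHADER_STAGE_FRAGMENT_BIT,
                       0 /*offset*/, sizeof(uint32_t), &texIndex);
    vkCmdDrawIndexed(cmd, mesh.indexCount, 1, mesh.firstIndex, 0, 0);
}
```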

My own thoughts:

So far I tend toward solution (1), but I still have some questions about it:

Solution (1)-(a) - I need to create dedicated texture descriptor sets for every single mesh. I think there is no big concern about the maximum allowed sampled image descriptor number in the pool; here is what I collected: NVIDIA: 1048576, AMD: 4294967295, Intel: 1200, Android/Snapdragon: 768, macOS: 256, iOS: 62. But some developers say that binding textures on a per-mesh basis may affect performance?
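
To make (1)-(a) concrete, here is the rough sketch I have in mind (descriptor pool and layout creation omitted; `texturePool`, `textureSetLayout`, `sampler` and `meshes` are placeholders from my engine):

```cpp
// At load time: allocate and write one combined-image-sampler set per mesh.
for (Mesh& mesh : meshes) {
    VkDescriptorSetAllocateInfo allocInfo{VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO};
    allocInfo.descriptorPool     = texturePool;
    allocInfo.descriptorSetCount = 1;
    allocInfo.pSetLayouts        = &textureSetLayout;   // set = 2, binding = 0
    vkAllocateDescriptorSets(device, &allocInfo, &mesh.textureSet);

    VkDescriptorImageInfo imageInfo{};
    imageInfo.sampler     = sampler;
    imageInfo.imageView   = mesh.textureView;
    imageInfo.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;

    VkWriteDescriptorSet write{VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET};
    write.dstSet          = mesh.textureSet;
    write.dstBinding      = 0;
    write.descriptorCount = 1;
    write.descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    write.pImageInfo      = &imageInfo;
    vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);
}

// Render loop: switch textures by rebinding the per-mesh set.
for (const Mesh& mesh : meshes) {
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                            2 /*firstSet*/, 1, &mesh.textureSet, 0, nullptr);
    vkCmdDrawIndexed(cmd, mesh.indexCount, 1, mesh.firstIndex, 0, 0);
}
```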

Solution (1)-(b) - The good part of this is creating only a single sampled image descriptor set and binding it at different binding positions. I think this would be impossible, since all binding positions must be hardcoded in the fragment shader with "binding = x", so I think the person who suggested this solution did not mean it to be used for dynamic content rendering.

Finally, I tend to go with solution (1)-(a), but I would still like to hear from other Vulkan developers whether that solution has performance concerns or whether there is a better solution.

PS: Recalling how we switch textures in OpenGL: each texture is given a unique ID by glGenTextures(), and then with this ID we use glBindTexture() to choose which texture to use when rendering a mesh. Is there a way to simulate this mechanism in Vulkan without too much of a performance penalty?
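
My naive idea (just an assumption, not something I have seen documented) is to keep one descriptor set per texture in a map keyed by an ID, so that "binding a texture" becomes binding that set:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vulkan/vulkan.h>

using TextureId = uint32_t;                                  // analogue of a GL texture name
std::unordered_map<TextureId, VkDescriptorSet> gTextureSets; // filled once when textures are loaded

// Rough equivalent of glBindTexture() inside the render loop.
void bindTexture(VkCommandBuffer cmd, VkPipelineLayout layout, TextureId id)
{
    VkDescriptorSet set = gTextureSets.at(id);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, layout,
                            2 /*descriptor set index used for textures*/, 1, &set, 0, nullptr);
}
```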

Hongkun Wang
  • "*there is no big concern about the maximum allowed sampled image descriptor number in the pool*" That's not how descriptor pools work. The limits on the number/type of descriptors you can allocate from a pool are what you say they are, when you create that pool. The limits you quote appear to be the number of sampled images in an *array of samplers*. – Nicol Bolas Jan 20 '19 at 02:31
  • "*make it not very useful for game/app engine*" In a game engine, the *engine* controls what the data is. If you are trying to make an engine that has no limitations, then you're sacrificing performance for flexibility. Good engines strike a balance between flexibility and performance, and at the end of the day, forcing many textures to be of a specific size is hardly a huge deal. – Nicol Bolas Jan 20 '19 at 02:34
  • Lastly, it's not clear exactly how this question is different from the question you link to. – Nicol Bolas Jan 20 '19 at 02:35
  • Thanks Nicol for all your suggestions. – Hongkun Wang Jan 20 '19 at 02:47
  • The number I collected is from VkPhysicalDeviceProperties.limits.maxDescriptorSetSampledImages. So if this is not the one I should check, what's the correct property should I check for the maximum sampled image descriptor? – Hongkun Wang Jan 20 '19 at 02:49
  • Hi Nicol. I did not mean to create a game engine with unlimited capability, but one with some reasonable limits, for example, supporting up to 1000 objects or sub-meshes per scene and up to 100 textures. – Hongkun Wang Jan 20 '19 at 02:51
  • My question is indeed the same as the one I linked to. I posted it here to see if some OpenGL driver developers might have a better solution. There was an OpenGL developer who gave me tips about how an OpenGL function is implemented. I understand that all OpenGL drivers are based on a single thread and are not suitable for Vulkan engine design in terms of performance. – Hongkun Wang Jan 20 '19 at 02:56
  • "*what's the correct property should I check for the maximum sampled image descriptor?*" It's not clear what question you're trying to answer. If you're doing a thing where you change descriptor sets for each model that uses a different texture, then the only limitation is on the number of descriptor sets. But there *is no limitation* on the number of descriptor sets. `maxDescriptorSetSampledImages` is the limit on the number of sampled images that a pipeline layout can have at one time (across all shaders in that pipeline) and therefore can access at any one time. – Nicol Bolas Jan 20 '19 at 03:09
  • Thanks Nicol, the number I collected is exactly what I need. I plan to use a single pipeline and so need to know how many sampled image descriptors this pipeline can use at the same time. – Hongkun Wang Jan 20 '19 at 04:07
  • "*the number I collected is exactly what I need.*" ... No, it isn't. Not if your description of what you're trying to do is correct. Your 1b says: "It could be a single texture descriptor but bound at per-mesh binding points.". That has *nothing to do* with how many sampled image descriptors the pipeline can use at once. – Nicol Bolas Jan 20 '19 at 04:09
  • I checked the Vulkan API again and yes, you are right. I need to do a bit more study to find out how many sampled image descriptors I can create out of a descriptor pool. – Hongkun Wang Jan 20 '19 at 04:32
  • Does it mean that if I would like to support up to 100 textures, I can simply fill the pool size structure `std::vector<VkDescriptorPoolSize> descriptorPoolSize` with `{ VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE, 100 }`? – Hongkun Wang Jan 20 '19 at 04:37

0 Answers