The goal would be to get optimal performance by doing every major calculation on the GPU, to make huge simulations in the browser possible: something like The Powder Toy, or more advanced.
And since WebGPU shipped in Chrome, it might already be possible. I have never done any shader programming, so I was a bit scared of the complexity involved, but looking at some tutorials gives me the impression it is easy enough, and it makes me quite enthusiastic about the possibilities of having logic and graphics side by side on the GPU, without having to go back to the CPU.
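To make the idea concrete, here is a minimal sketch of what one GPU-side simulation step could look like in WGSL. Everything here (the `Params` struct, `cellsIn`/`cellsOut` double-buffering, the toy update rule) is hypothetical and only illustrates the structure, not a real simulation:

```wgsl
// Hypothetical WGSL compute shader: one step of a cell-grid simulation,
// double-buffered so reads and writes never conflict within a step.
struct Params {
  width : u32,
  height : u32,
}

@group(0) @binding(0) var<uniform> params : Params;
@group(0) @binding(1) var<storage, read> cellsIn : array<u32>;
@group(0) @binding(2) var<storage, read_write> cellsOut : array<u32>;

@compute @workgroup_size(8, 8)
fn step(@builtin(global_invocation_id) id : vec3<u32>) {
  // Guard: the dispatch may be padded beyond the grid size.
  if (id.x >= params.width || id.y >= params.height) { return; }
  let i = id.y * params.width + id.x;
  var above : u32 = 0u;
  if (id.y > 0u) { above = cellsIn[i - params.width]; }
  // Toy rule: a cell becomes set if it or the cell above it was set.
  cellsOut[i] = cellsIn[i] | above;
}
```

On the host side you would swap the two buffers each frame and feed the output straight into a render pass, which is exactly the "never go back to the CPU" loop described above.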
So maybe I am overlooking something. ChatGPT also advised against it, but couldn't give me any fundamental reason why it would not be possible if I can handle the dirty work close to the metal.
What I could find is: shader programs are typically very small. So maybe it wouldn't even make everything much faster, even if I implemented all the simulation and UI logic inside WGSL, for some technical reason? (Assuming I don't make major mistakes, which I am aware are easy to make.)
So why can't I find anyone who has already done something like this?
Is the code size limited, does heavy control flow block performance, or are the tools simply missing to do it properly?
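On the control-flow point: my understanding is that invocations in a workgroup run in lockstep groups, so when neighbouring invocations take different branches, both sides are executed and masked out, which is one reason branchy code can be slow on GPUs. A hedged WGSL sketch of the usual workaround (names like `data` and `threshold` are made up for illustration):

```wgsl
// Illustrative sketch: rewriting a data-dependent branch branchlessly.
@group(0) @binding(0) var<storage, read_write> data : array<u32>;
const threshold : u32 = 100u;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
  if (id.x >= arrayLength(&data)) { return; }
  let x = data[id.x];
  // Divergent form: neighbouring invocations may disagree on the condition,
  // so both paths are executed with masking:
  //   if (x > threshold) { data[id.x] = x * 2u; } else { data[id.x] = 0u; }
  // Branchless form: both operands are computed, one is selected per invocation:
  data[id.x] = select(0u, x * 2u, x > threshold);
}
```

So heavy control flow doesn't block anything outright, it just costs more than on a CPU; uniform branches (where the whole workgroup agrees) are cheap.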
There is surely lots of stuff to learn, like:
"Then I found that the size of a workgroup (the number of operations calculated in a single batch) is set in code to be as big as the matrix side. It works fine until the matrix side is lower than the number of ALUs (arithmetic logic units) on the GPU, which is reflected in the WebGPU API as a maximum workgroup size property." https://pixelscommander.com/javascript/webgpu-computations-performance-in-comparison-to-webgl/
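For reference, the actual limits in the WebGPU API are `limits.maxComputeWorkgroupSizeX/Y/Z` and `limits.maxComputeInvocationsPerWorkgroup` (256 by default), and the standard way around the problem in that article is to decouple the workgroup size from the problem size. A sketch under those assumptions (`Params`, `result`, and the side length `n` are made-up names):

```wgsl
// Sketch: fix the workgroup size at compile time and cover a matrix of
// side n with ceil(n / 16) workgroups per axis, dispatched from the host,
// e.g. pass.dispatchWorkgroups(Math.ceil(n / 16), Math.ceil(n / 16)).
struct Params {
  n : u32,
}

@group(0) @binding(0) var<uniform> params : Params;
@group(0) @binding(1) var<storage, read_write> result : array<f32>;

@compute @workgroup_size(16, 16) // 256 invocations, the default per-workgroup limit
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
  // Guard against the padding invocations of the last, partial workgroup.
  if (id.x >= params.n || id.y >= params.n) { return; }
  result[id.y * params.n + id.x] = 0.0; // ... per-element work here ...
}
```

With that pattern the matrix side no longer has to match any hardware limit at all.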
But even if not optimized, shouldn't it still outperform JS and Wasm? If not, why not?