I'm looking for some insight into XNA on Xbox 360, mainly whether it's possible to run vector-based float mathematics on its GPU.
If there's a way, can you point me in the right direction?
I don't claim to be an expert on this, but hopefully this can point you in a helpful direction.
Is it possible? Yes. You probably already know that the GPU is good at such calculations (hence the question) and you can indeed control the GPU using XNA. Whether or not it will suit your needs is a different matter.
To make use of the GPU, you'll presumably want to write shaders in HLSL. There's a decent introduction to HLSL in the context of XNA at Riemer's, which you might want to run through. Notably, that tutorial focuses on making the GPU do graphics-related crunching, but what you write in the shaders is up to you. If your vector-based float math is for the purpose of rendering (and thus can stay in the GPU domain), you're in luck and can stop here.
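To make that concrete, here's a minimal sketch of the C# side of driving a custom shader in XNA 4.0. The effect name ("MyMath") and its parameter name are hypothetical stand-ins for whatever your HLSL actually declares, and this fragment is assumed to live inside your Game subclass:

```
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// "MyMath" is a hypothetical compiled .fx file; its technique and
// parameter names must match whatever you declared in your HLSL.
Effect effect = Content.Load<Effect>("MyMath");
effect.Parameters["InputVector"].SetValue(new Vector4(1f, 2f, 3f, 4f));

// A full-screen quad, so the pixel shader runs once per output texel.
VertexPositionTexture[] quad =
{
    new VertexPositionTexture(new Vector3(-1, -1, 0), new Vector2(0, 1)),
    new VertexPositionTexture(new Vector3(-1,  1, 0), new Vector2(0, 0)),
    new VertexPositionTexture(new Vector3( 1, -1, 0), new Vector2(1, 1)),
    new VertexPositionTexture(new Vector3( 1,  1, 0), new Vector2(1, 0)),
};

// "Running" the math is just drawing with the effect applied.
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, quad, 0, 2);
}
```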
Likely, though, rendering onscreen is not what you're after. You have a fair amount of flexibility in HLSL as far as doing your math goes; getting the results back out to the CPU, however, is not what the system was designed for. This is getting fuzzy for me, but Shawn Hargreaves (an XNA dev) states on more than one occasion that getting output from the GPU (other than what is rendered onscreen) is non-trivial and has performance implications: retrieving the data involves a call to GetData, which causes a pipeline stall while the CPU waits for the GPU to finish.
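For illustration, here's roughly what that readback looks like in XNA 4.0. The device variable, the render target size, and the Color format are assumptions for the sketch:

```
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Assume 'device' is your GraphicsDevice, and that you've already drawn
// your computation pass into this render target.
RenderTarget2D target = new RenderTarget2D(device, 256, 256, false,
                                           SurfaceFormat.Color,
                                           DepthFormat.None);

// ... device.SetRenderTarget(target), draw, device.SetRenderTarget(null) ...

// This is the expensive part: GetData forces the CPU to sit and wait
// until the GPU has finished everything queued against this surface.
Color[] results = new Color[target.Width * target.Height];
target.GetData(results);
```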
So it can be done. The XNA framework will let you write shaders for the 360 (which supports Shader Model 3.0 plus a few extensions) and it is possible to get those results out, though it may not be efficient enough for your needs.
As was stated above, the Xbox 360 is fully capable of any HLSL calculation; specifically, it handles Vertex and Pixel Shader Model 3.0 instructions and has an enhanced set of instructions specific to the platform.
Since HLSL is inherently vector-based, you have all the tools you need: dot and cross products, vector operations, and matrix calculations. If you want to send calculations to the GPU and edit/use the results on the CPU, you can write them to a texture, then fetch it on the CPU side and decode it. Particles or physical interactions (such as water) are a few of the occasions when you might want to do so; a rough sketch follows.
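As a sketch of that write-to-texture-then-fetch round trip (XNA 4.0, HiDef profile): assuming the device supports SurfaceFormat.Single render targets, each texel is already a 32-bit float, so "decoding" is just reading the array. The sizes, format, and data layout here are all assumptions:

```
using Microsoft.Xna.Framework.Graphics;

// One 32-bit float per pixel; the shader writes its result to the red
// channel and the other channels are unused (hypothetical layout).
RenderTarget2D results = new RenderTarget2D(device, 512, 512, false,
                                            SurfaceFormat.Single,
                                            DepthFormat.None);

device.SetRenderTarget(results);
// ... run your computation shader over a full-screen quad here ...
device.SetRenderTarget(null);

// Fetch and decode: with SurfaceFormat.Single the texels map directly
// onto a float[], e.g. particle positions or water heights would be
// reconstructed from values like these.
float[] data = new float[results.Width * results.Height];
results.GetData(data);
float firstValue = data[0];
```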
I wonder if there have been any new results (articles, source code, etc.) on the subject of doing some form of CUDA-like computation on the Xbox(es). There was one research paper with a promise of source code (LGP on GPGPU), apparently gone with the wind, or maybe not? :-) And one more general one (LGP on Xbox 360), but both dependent on XNA.
Oh, and as for those two blog articles trying so hard to turn people away from, God forbid :-), using the Xbox for anything but yet another triangle coloring, they still exist here: 1st, 2nd, for all the tragicomic value of their contrived "examples". No one really wanted to "draw a primitive" or anything so pedestrian, or to go via XNA at all (if they could help it :-).
XNA was meant to slow things down while creating an appearance of openness, but that's also a historical note; the real question is whether anyone has done anything along these lines. There are much stronger Xboxes these days, but that may not mean much unless basic CUDA-like access has been relaxed.
The most tragic thing about the whole blocking of Xbox GPU usage is that it was the Xbox itself that ended up desperately needing just a bit of help from its own GPU, for a thing that was magical and shining at the time and then got suffocated (Kinect). All it needed was for the Xbox API to open a little door for basic CUDA-like computation, and someone would have written an efficient contour/skeleton constructor and smoother for free in a matter of months (including insiders, with no open source "interruptions" :-), just because it was a little bit of magic.
Kinect was initially promised something like 10% of the GPU (reducing 3D grayscale to a contour and skeleton is image processing), and it didn't need more than a few percent (with efficient data loading and reading: 640x480 grayscale spread over 240 cores at 2 ns per 3D op). Meaning there was an initial API that was removed/blocked, out of fear that some sword wouldn't be "metallic enough"? :-))
By the time MS opened up at least the Kinect "protocol", it was too late for everything (skeleton too shaky, too slow, no way to turn it off and reprocess the raw data), but I can't help wondering whether some people maybe continued doing something, or whether someone later published some of that "forbidden" info.