You can use LINQ:
var myIntData = rawDepthData.Select(x => (int)x).ToArray(); // requires using System.Linq;
Or simply a for loop (which would be MUCH faster), if you aim for performance:
var myIntData = new int[rawDepthData.Length];
for (var i = 0; i < rawDepthData.Length; i++)
    myIntData[i] = rawDepthData[i];
If striving for performance, I'd use the for method, and also have my int[] array ready (rather than allocating a new one every time), since the raw depth data length most likely isn't changing.
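For example, something along these lines (OnNewDepthFrame and depthBuffer are just illustrative names, not part of any Kinect API):

private int[] depthBuffer; // reused across frames instead of allocating per call

void OnNewDepthFrame(short[] rawDepthData)
{
    // Reallocate only if the frame size actually changed
    if (depthBuffer == null || depthBuffer.Length != rawDepthData.Length)
        depthBuffer = new int[rawDepthData.Length];

    for (var i = 0; i < rawDepthData.Length; i++)
        depthBuffer[i] = rawDepthData[i];
}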
Another, more performant way to do it would be using unsafe code, pointers, and unrolling the loop:
unsafe static void CopyData(short[] input, int[] output)
{
    // Pin both arrays so the GC can't move them while we hold raw pointers
    fixed (short* pt1 = input)
    fixed (int* pt2 = output)
    {
        short* pt1f = pt1;
        int* pt2f = pt2;

        // Unrolled x8: each assignment widens a short to an int
        // (assumes input.Length is divisible by 8, see note below)
        for (int i = 0; i < input.Length / 8; i++)
        {
            *pt2f = *pt1f;
            *(pt2f + 1) = *(pt1f + 1);
            *(pt2f + 2) = *(pt1f + 2);
            *(pt2f + 3) = *(pt1f + 3);
            *(pt2f + 4) = *(pt1f + 4);
            *(pt2f + 5) = *(pt1f + 5);
            *(pt2f + 6) = *(pt1f + 6);
            *(pt2f + 7) = *(pt1f + 7);
            pt1f += 8;
            pt2f += 8;
        }
    }
}
While a bit ugly for the .NET framework, this should be the fastest way. I've actually done some profiling (not scientific by any means), and on some of my tests, with different array sizes and loop counts, I get a ~30-40% improvement with the unsafe version (note that without unrolling the loop, I only get a marginal improvement of around 5-10%, depending on array size). This code assumes the length is divisible by 8; you can adjust or check as necessary :-)
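If you can't guarantee that, one way to adjust (an untested sketch of the same idea, and remember unsafe code needs the /unsafe compiler switch) is to unroll the bulk and copy the remaining 0-7 elements with a plain loop:

unsafe static void CopyDataAnyLength(short[] input, int[] output)
{
    fixed (short* pt1 = input)
    fixed (int* pt2 = output)
    {
        short* pt1f = pt1;
        int* pt2f = pt2;

        int unrolled = input.Length / 8 * 8; // largest multiple of 8 <= Length

        for (int i = 0; i < unrolled; i += 8)
        {
            *pt2f = *pt1f;
            *(pt2f + 1) = *(pt1f + 1);
            *(pt2f + 2) = *(pt1f + 2);
            *(pt2f + 3) = *(pt1f + 3);
            *(pt2f + 4) = *(pt1f + 4);
            *(pt2f + 5) = *(pt1f + 5);
            *(pt2f + 6) = *(pt1f + 6);
            *(pt2f + 7) = *(pt1f + 7);
            pt1f += 8;
            pt2f += 8;
        }

        // Tail: copy the remaining 0-7 elements one by one
        for (int i = unrolled; i < input.Length; i++)
            *pt2f++ = *pt1f++;
    }
}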
Profiling*
* just because I was curious and had a bit of time right now
Since I was curious, here are the results of profiling the different methods (100% is the fastest method; "iterations" is the number of times the copy operation is performed; I verify the data, but that's outside the performance measurement). The arrays are pre-allocated, not allocated on every iteration. I left LINQ out since it was WAY slower.
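For context, the timing setup was roughly along these lines (a minimal sketch, not the exact code I ran; Profile is just an illustrative helper built on System.Diagnostics.Stopwatch):

using System;
using System.Diagnostics;

static TimeSpan Profile(string name, int iterations, Action copyOnce)
{
    copyOnce(); // warm-up run, excluded from the measurement

    var sw = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
        copyOnce();
    sw.Stop();

    Console.WriteLine("{0}: {1}", name, sw.Elapsed);
    return sw.Elapsed;
}

// e.g. Profile("Unsafe unrolled", 100000, () => CopyData(input, output));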
- 100,000 iterations with array length of 80,000

Unsafe unrolled: 100% - 00:00:02.5536184
Simple for() loop: 70.88% - 00:00:03.6024272
foreach(...): 59.24% - 00:00:04.3105598
Unrolled for(): 38.26% - 00:00:06.6739715

- 1,000,000 iterations with array length of 8,000

Unsafe unrolled: 100% - 00:00:02.3733392
Simple for() loop: 67.70% - 00:00:03.5055304
foreach(...): 55.00% - 00:00:04.3149544
Unrolled for(): 39.53% - 00:00:06.0041744

- 10,000 iterations with array length of 800,000

Unsafe unrolled: 100% - 00:00:02.5565005
Simple for() loop: 73.69% - 00:00:03.4688333
foreach(...): 59.75% - 00:00:04.2783304
Unrolled for(): 39.46% - 00:00:06.4778782
I'm actually pretty surprised that the safe "unrolled" for is the most expensive method... thinking about it, it makes sense, but at first sight (at least to me, coming from an x86 assembly background, where unrolling loops for performance was pretty common back in the day) I'd never have said so. I also didn't expect foreach to be all that much slower than a simple for.
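For clarity, the safe variants I measured were along these lines (a sketch; the exact code may have differed slightly):

// foreach(...): the enumerator yields each short; the output index is tracked separately
static void CopyForeach(short[] input, int[] output)
{
    int i = 0;
    foreach (short value in input)
        output[i++] = value;
}

// Unrolled for(): same x8 unrolling as the unsafe version, but with safe,
// indexed array access (again assumes the length is divisible by 8); my guess
// is the bounds checks on the offset indexes are what make it so expensive
static void CopyUnrolled(short[] input, int[] output)
{
    for (int i = 0; i < input.Length; i += 8)
    {
        output[i] = input[i];
        output[i + 1] = input[i + 1];
        output[i + 2] = input[i + 2];
        output[i + 3] = input[i + 3];
        output[i + 4] = input[i + 4];
        output[i + 5] = input[i + 5];
        output[i + 6] = input[i + 6];
        output[i + 7] = input[i + 7];
    }
}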