
I'm trying to calculate the surface area of a player using the Kinect for Xbox 360 and SDK 1.8. I've run into a small hiccup where I need to convert a short[] to an int[] and vice versa. To explain more...

        short[] rawDepthData = new short[depthFrame.PixelDataLength];
        depthFrame.CopyPixelDataTo(rawDepthData);

But I need the rawDepthData in int[] format.

Help appreciated, many thanks.

envyM6

2 Answers


For something that you probably want to be as fast as possible, you might want to consider just writing your own loop to copy the short[] array into a new int[] array, for example:

public static int[] Convert(short[] input)
{
    int[] result = new int[input.Length];

    for (int i = 0; i < input.Length; ++i)
        result[i] = input[i];

    return result;
}

Here's how you'd convert back. IMPORTANT: This assumes that the int values will each fit into a short. If they do not, they will be truncated and the values will change!

public static short[] Convert(int[] input)
{
    short[] result = new short[input.Length];

    unchecked
    {
        for (int i = 0; i < input.Length; ++i)
            result[i] = (short) input[i];
    }

    return result;
}

(Note: When comparing the performance of this with that of Array.ConvertAll(), you must compile this code as a RELEASE build - otherwise you'd be comparing a debug build with a release build.)
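For reference, the `Array.ConvertAll` alternative mentioned above looks like this (a minimal sketch; it takes the source array and a converter delegate, and the narrowing direction has the same truncation caveat as the loop version):

```csharp
using System;

class ConvertAllExample
{
    static void Main()
    {
        short[] rawDepthData = { 1, 2, 3 };

        // Widening short -> int: always safe
        int[] asInts = Array.ConvertAll(rawDepthData, s => (int)s);

        // Narrowing int -> short: values outside short range are truncated
        short[] backToShorts = Array.ConvertAll(asInts, i => (short)i);

        Console.WriteLine(asInts.Length);       // same length as the input
        Console.WriteLine(backToShorts[2]);     // round-trips for in-range values
    }
}
```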

Matthew Watson

You can use Linq:

var myIntData = rawDepthData.Select(x => (int)x).ToArray();

Or simply a for loop (which would be MUCH faster), if you aim for performance:

var myIntData = new int[rawDepthData.Length];
for(var i = 0; i < rawDepthData.Length; i++)
   myIntData[i] = rawDepthData[i];

If striving for performance, I'd use the for method and also have my int[] array ready (rather than allocating a new one every time), since the raw depth data length is most likely not changing.
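The buffer-reuse idea could be sketched like this (the class and method names here are made up for illustration, not part of the Kinect SDK):

```csharp
// Reuse a single int[] buffer across frames instead of allocating per frame
class DepthConverter
{
    private int[] depthBuffer;   // allocated once, reused afterwards

    public int[] Convert(short[] rawDepthData)
    {
        // (Re)allocate only if the frame size changes
        if (depthBuffer == null || depthBuffer.Length != rawDepthData.Length)
            depthBuffer = new int[rawDepthData.Length];

        for (var i = 0; i < rawDepthData.Length; i++)
            depthBuffer[i] = rawDepthData[i];

        return depthBuffer;
    }
}
```

Note that the returned array is the shared buffer, so its contents are overwritten on the next call.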

Another, faster way to do it is to use unsafe code, pointers, and loop unrolling:

unsafe static void CopyData(short[] input, int[] output)
{
    fixed(short* pt1 = input)
    fixed(int* pt2 = output)
    {
        short* pt1f = pt1;
        int* pt2f = pt2;
        for (int i = 0; i < input.Length / 8; i++)
        {
            *pt2f = *pt1f;
            *(pt2f + 1) = *(pt1f + 1);
            *(pt2f + 2) = *(pt1f + 2);
            *(pt2f + 3) = *(pt1f + 3);
            *(pt2f + 4) = *(pt1f + 4);
            *(pt2f + 5) = *(pt1f + 5);
            *(pt2f + 6) = *(pt1f + 6);
            *(pt2f + 7) = *(pt1f + 7);
            pt1f += 8;
            pt2f += 8;
        }
    }
}

While a bit ugly for the .NET framework, this should be the fastest way. I've actually done some profiling (not scientific by any means), and in some of my tests, with different array sizes and loop counts, I get a ~30-40% improvement with the unsafe version (note that without unrolling the loop, I get a very marginal performance increase of around 5-10%, depending on array size). This assumes the length is divisible by 8; you can adjust or check as necessary :-)
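One way to handle lengths that aren't divisible by 8 is to finish the leftover 0-7 elements with a plain loop after the unrolled part (a sketch of that adjustment; requires compiling with /unsafe):

```csharp
// Unrolled copy that also handles lengths not divisible by 8
unsafe static void CopyDataAnyLength(short[] input, int[] output)
{
    fixed (short* pt1 = input)
    fixed (int* pt2 = output)
    {
        short* src = pt1;
        int* dst = pt2;

        // Full blocks of 8, copied with the unrolled body
        for (int i = 0; i < input.Length / 8; i++)
        {
            dst[0] = src[0]; dst[1] = src[1]; dst[2] = src[2]; dst[3] = src[3];
            dst[4] = src[4]; dst[5] = src[5]; dst[6] = src[6]; dst[7] = src[7];
            src += 8;
            dst += 8;
        }

        // Remaining 0-7 elements, copied one by one
        for (int i = 0; i < input.Length % 8; i++)
            dst[i] = src[i];
    }
}
```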

Profiling

* just because I was curious and had a bit of time right now

Since I was curious, here are the results of profiling the different methods (100% is the fastest method; "iterations" is the number of times I perform the copy operation... I verify the data, but that's outside the performance measurement). Also, the arrays are pre-allocated (not allocated on every iteration). I left Linq out since it was WAY slower.

  1. 100,000 iterations with array length of 80,000

Unsafe unrolled: 100% - 00:00:02.5536184

Simple for() loop: 70.88% - 00:00:03.6024272

foreach(...): 59.24% - 00:00:04.3105598

Unrolled for(): 38.26% - 00:00:06.6739715

  2. 1,000,000 iterations with array length of 8,000

Unsafe unrolled: 100% - 00:00:02.3733392

Simple for() loop: 67.70% - 00:00:03.5055304

foreach(...): 55.00% - 00:00:04.3149544

Unrolled for(): 39.53% - 00:00:06.0041744

  3. 10,000 iterations with array length of 800,000

Unsafe unrolled: 100% - 00:00:02.5565005

Simple for() loop: 73.69% - 00:00:03.4688333

foreach(...): 59.75% - 00:00:04.2783304

Unrolled for(): 39.46% - 00:00:06.4778782

I'm actually pretty surprised that the safe "unrolled" for is the most expensive method... thinking about it, it makes sense, but at first sight (at least to me, coming from an x86 assembly background where unrolling loops for performance was pretty common back in the day) I'd never have said so. I also didn't expect foreach to be that much slower than a simple for.

Jcl
  • I'll add a note that `for` is definitely much faster – Jcl Apr 25 '16 at 14:18
  • Also note that Cast() is in this case more appropriate and faster than Select() – Adriano Repetti Apr 25 '16 at 14:21
  • @AdrianoRepetti I had suggested `Cast` (see the edit history), but it won't work to convert a `short` to an `int`, `Cast` does boxing and unboxing, and a boxed object can't be unboxed to a different type... it'll throw an `InvalidCastException` – Jcl Apr 25 '16 at 14:24
  • Thank you, as soon as I'm back at a computer I will check its implementation, I see no need for boxing. Pretty surprising! – Adriano Repetti Apr 25 '16 at 14:30
  • @AdrianoRepetti : it's mentioned in [this answer](http://stackoverflow.com/a/1684514/68972) – Jcl Apr 25 '16 at 14:40
  • Thank you for the reference, you saved me the time to check on my own. Good catch with unsafe unrolled loop – Adriano Repetti Apr 25 '16 at 15:03