I need to optimize this function, which gets called frequently:
private static InterstingDataValues CalculateFor(IData data)
{
    InterstingDataValues dataValues = new InterstingDataValues(null);
    float[] pixels = data.ReadAsFloatBuffer();
    if (pixels == null)
    {
        return null;
    }
    float value1 = pixels[0];
    if (float.IsNaN(value1))
    {
        return null;
    }
    dataValues.HighestIntensityInData = float.MinValue;
    dataValues.LowestIntensityInData = float.MaxValue;
    for (int i = 0; i < pixels.Length; ++i)
    {
        float pixelf = pixels[i];
        if (float.IsNaN(pixelf))
        {
            pixelf = 0;
        }
        dataValues.SumIntensity += (uint)pixelf;
        dataValues.HighestIntensityInData = Math.Max(dataValues.HighestIntensityInData, pixelf);
        dataValues.LowestIntensityInData = Math.Min(dataValues.LowestIntensityInData, pixelf);
    }
    // pixels.Length avoids the LINQ Count() call, which is redundant on an array
    dataValues.AverageIntensity = dataValues.SumIntensity / (uint)pixels.Length;
    if (double.IsNaN(dataValues.HighestIntensityInData))
    {
        dataValues.HighestIntensityInData = float.MaxValue;
    }
    if (double.IsNaN(dataValues.LowestIntensityInData))
    {
        dataValues.LowestIntensityInData = 0;
    }
    return dataValues;
}
I notice C# has built-in LINQ methods like

pixels.Max()
pixels.Min()
pixels.Sum()
pixels.Average()

which I would assume to be well optimized. However, my feeling is that calling these separately would be much less efficient than computing all four in a single pass, since each call traverses the whole array again.
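To make the comparison concrete, here is a minimal sketch of the single-pass idea: one loop that accumulates min, max, and sum together, with the same NaN-to-zero handling as the original loop. The class and method names (SinglePassDemo, Stats) are illustrative, not part of the original code.

```csharp
using System;

static class SinglePassDemo
{
    // One traversal of the array instead of four separate LINQ traversals.
    public static (float Min, float Max, double Sum, double Average) Stats(float[] pixels)
    {
        float min = float.MaxValue, max = float.MinValue;
        double sum = 0;
        foreach (float p in pixels)
        {
            float v = float.IsNaN(p) ? 0f : p; // same NaN handling as the original loop
            if (v < min) min = v;
            if (v > max) max = v;
            sum += v;
        }
        return (min, max, sum, sum / pixels.Length);
    }
}
```

Whether the single pass actually beats four LINQ calls depends on array size and cache behaviour, so it is worth benchmarking rather than assuming.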
My current thinking is to send blocks of the array off to separate threads to compute min/max/sum for each block. Once I have the per-block results, I can run a final min/max/sum over those results.
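A rough sketch of that blocking idea, under the assumption that the array is split into fixed-size chunks reduced on separate tasks (the names BlockedStats and Compute are hypothetical):

```csharp
using System;
using System.Threading.Tasks;

static class BlockedStats
{
    // Split the array into blockCount chunks, reduce each chunk on its own
    // task, then combine the per-chunk results in a final reduction.
    public static (float Min, float Max, double Sum) Compute(float[] pixels, int blockCount)
    {
        var partial = new (float Min, float Max, double Sum)[blockCount];
        int blockSize = (pixels.Length + blockCount - 1) / blockCount;
        var tasks = new Task[blockCount];
        for (int b = 0; b < blockCount; b++)
        {
            int start = b * blockSize;
            int end = Math.Min(start + blockSize, pixels.Length);
            int idx = b; // capture a per-iteration copy for the lambda
            tasks[b] = Task.Run(() =>
            {
                float min = float.MaxValue, max = float.MinValue;
                double sum = 0;
                for (int i = start; i < end; i++)
                {
                    float v = float.IsNaN(pixels[i]) ? 0f : pixels[i];
                    if (v < min) min = v;
                    if (v > max) max = v;
                    sum += v;
                }
                partial[idx] = (min, max, sum);
            });
        }
        Task.WaitAll(tasks);

        // Final min/max/sum over the per-block results.
        float gMin = float.MaxValue, gMax = float.MinValue;
        double gSum = 0;
        foreach (var p in partial)
        {
            gMin = Math.Min(gMin, p.Min);
            gMax = Math.Max(gMax, p.Max);
            gSum += p.Sum;
        }
        return (gMin, gMax, gSum);
    }
}
```

Each task writes only to its own slot of the partial array, so no locking is needed until the cheap final reduction.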
But I have a feeling that C# has some built-in way of doing this through Parallel.For, though I get worried by answers that mention the word "interlocked". I need to do some more digging into that, but I am wondering if I am on the right track.
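For what it's worth, Parallel.For has a thread-local overload (localInit/body/localFinally) that sidesteps Interlocked in the hot loop: each worker accumulates into its own (min, max, sum) and only the final merge is synchronized. A sketch (ParallelStats is a hypothetical name):

```csharp
using System;
using System.Threading.Tasks;

static class ParallelStats
{
    // Thread-local accumulation: no shared state is touched per element,
    // so no Interlocked calls are needed inside the loop body.
    public static (float Min, float Max, double Sum) Compute(float[] pixels)
    {
        object gate = new object();
        float gMin = float.MaxValue, gMax = float.MinValue;
        double gSum = 0;

        Parallel.For(0, pixels.Length,
            // localInit: a fresh accumulator for each worker thread
            () => (Min: float.MaxValue, Max: float.MinValue, Sum: 0.0),
            // body: update only the worker's own accumulator
            (i, state, local) =>
            {
                float v = float.IsNaN(pixels[i]) ? 0f : pixels[i];
                return (Math.Min(local.Min, v), Math.Max(local.Max, v), local.Sum + v);
            },
            // localFinally: merge each worker's result once, under a lock
            local =>
            {
                lock (gate)
                {
                    gMin = Math.Min(gMin, local.Min);
                    gMax = Math.Max(gMax, local.Max);
                    gSum += local.Sum;
                }
            });

        return (gMin, gMax, gSum);
    }
}
```

The lock runs once per worker rather than once per pixel, so contention should be negligible, but profiling on real data sizes is the only way to be sure it beats the sequential loop.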
Thanks, Chris