You seem to be constantly mixing up your x and y offsets, which can easily be avoided by simply calling your loop variables x and y whenever you loop through image data. Also, image data is generally saved line by line, so your outer loop should be the Y loop going over the height, and the inner loop should process the X coordinates on one line, and should thus loop over the width.
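As a minimal sketch of what I mean (placeholder names, not your actual code):
for (Int32 y = 0; y < height; y++)       // outer loop: one iteration per line
{
    for (Int32 x = 0; x < width; x++)    // inner loop: the pixels on that one line
    {
        // process pixel (x, y) here
    }
}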
Also, I'm not sure where your original data comes from, but in most of the cases I've seen where the image data is in multidimensional arrays like this, the Y is actually the first index in the array. Your actual image building function also assumes this, since it uses G.GetLength(0) to get the height of the image. But your channel resize function doesn't; it makes a multidimensional array as new int[816, 683], which would be a 683*816 image, not 816*683 as you said. So that certainly seems wrong.
Since you confirmed it to be [x,y], I adapted this solution to use it like that.
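To make the difference between the two layouts explicit (using your 816*683 example, and assuming 816 is the intended width):
Int32[,] xyChannel = new Int32[816, 683]; // [x, y] layout: GetLength(0) is the width (816)
Int32[,] yxChannel = new Int32[683, 816]; // [y, x] layout: GetLength(0) is the height (683)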
That aside, you hardcoded a lot of values in your functions, which is very bad practice. If you know you will reduce the image to 1/3rd by taking only one in three pixels, just give that 3 as a parameter.
The reduction code:
public static Int32[,] ResizeChannel(Int32[,] origChannel, Int32 lossfactor)
{
    Int32 newWidth = origChannel.GetLength(0) / lossfactor;
    Int32 newHeight = origChannel.GetLength(1) / lossfactor;
    // Clip the source dimensions to exact multiples of lossfactor, to avoid rounding errors.
    Int32 origHeight = newHeight * lossfactor;
    Int32 origWidth = newWidth * lossfactor;
    Int32[,] newChannel = new Int32[newWidth, newHeight];
    Int32 newX = 0;
    Int32 newY = 0;
    for (Int32 y = 1; y < origHeight; y += lossfactor)
    {
        newX = 0;
        for (Int32 x = 1; x < origWidth; x += lossfactor)
        {
            // Take one pixel out of every (lossfactor) pixels in both directions.
            newChannel[newX, newY] = origChannel[x, y];
            newX++;
        }
        newY++;
    }
    return newChannel;
}
The actual build code, as was remarked by GSerg in the comments, is wrong because you don't take the stride into account. The stride is the actual byte length of each line of pixels, and this is not just width * BytesPerPixel, since it gets rounded up to the next multiple of 4 bytes.

So you need to initialize your array as height * stride, not as height * width * 3, and you need to skip your write offset to the next multiple of the stride whenever you go to the next Y line, rather than assuming it will get there automatically because your X processing adds 3 for each pixel. It will not get there automatically, unless, by pure coincidence, your image width happens to be a multiple of 4 pixels.
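As an illustration (the width here is just an example, not necessarily your actual image size):
Int32 width = 683;                           // example width, 24bpp image
Int32 rawLineLength = width * 3;             // 2049 bytes of actual pixel data per line
Int32 stride = (rawLineLength + 3) / 4 * 4;  // 2052: rounded up to the next multiple of 4 bytes
In practice you don't need to calculate this yourself; you just read it from BitmapData.Stride, as the code below does.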
Also, if you only use one channel for this, there is no reason to give it all three channels. Just give a single one.
// Requires System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices.
public static Bitmap CreateGreyImage(Int32[,] greyChannel)
{
    Int32 width = greyChannel.GetLength(0);
    Int32 height = greyChannel.GetLength(1);
    Bitmap result = new Bitmap(width, height, PixelFormat.Format24bppRgb);
    Rectangle rect = new Rectangle(0, 0, width, height);
    BitmapData bmpData = result.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    // Stride is the actual line width in bytes, including the padding at the end of each line.
    Int32 stride = bmpData.Stride;
    Int32 bytes = stride * height;
    Byte[] pixelValues = new Byte[bytes];
    Int32 offset = 0;
    for (Int32 y = 0; y < height; y++)
    {
        Int32 workOffset = offset;
        for (Int32 x = 0; x < width; x++)
        {
            // Write the same value into the B, G and R bytes to get a grey pixel.
            pixelValues[workOffset + 0] = (Byte)greyChannel[x, y];
            pixelValues[workOffset + 1] = (Byte)greyChannel[x, y];
            pixelValues[workOffset + 2] = (Byte)greyChannel[x, y];
            workOffset += 3;
        }
        // Add stride to get the start offset of the next line.
        offset += stride;
    }
    Marshal.Copy(pixelValues, 0, bmpData.Scan0, bytes);
    result.UnlockBits(bmpData);
    return result;
}
Now, this works as expected if your R, G and B channels are indeed identical. But if they are not, you have to realize there is a difference between reducing the image to greyscale and just building a grey image from the green channel. On a colour image, you will get totally different results if you take the blue or red channel instead.
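If what you actually want is a proper greyscale conversion of a colour image, one common approach (this is not in your code; the helper below and its weights are just an illustration) is to combine the three channels with luminance weights before building the bitmap:
// Hypothetical helper: combines separate R, G and B channel arrays (all in [x, y] layout and
// of identical size) into one grey channel using the common ITU-R BT.601 luminance weights.
public static Int32[,] ToGreyChannel(Int32[,] red, Int32[,] green, Int32[,] blue)
{
    Int32 width = red.GetLength(0);
    Int32 height = red.GetLength(1);
    Int32[,] grey = new Int32[width, height];
    for (Int32 y = 0; y < height; y++)
    {
        for (Int32 x = 0; x < width; x++)
        {
            grey[x, y] = (Int32)(0.299 * red[x, y] + 0.587 * green[x, y] + 0.114 * blue[x, y]);
        }
    }
    return grey;
}
Feeding that result into CreateGreyImage then gives you an actual greyscale version of the image instead of just the green channel.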
This was the code I executed for this:
Int32[,] greyar = ResizeChannel(greenar, 3);
Bitmap newbm = CreateGreyImage(greyar);
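If you want to inspect the result on disk, you can simply save the bitmap afterwards (the file name here is just an example):
newbm.Save("grey_result.png", ImageFormat.Png);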