
I’m writing software for a Cortex-M4 microcontroller. I’d like to resize a JPEG image – take, as an example, a JPEG image with MCUs of 16x16 (256) pixels. Is it reasonable to reduce each MCU down to an arbitrarily smaller MCU, say from 16x16 to 1x1, 2x2, or 3x3? My thinking was that I could merge/average the color information among the 256 pixels until I’ve reduced the MCU to my desired size. Is this a reasonable or a naive way of resizing my image?

I’ve started thinking through this problem and would like some advice on my approach. A rough sketch of the averaging idea is below.
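Here is a minimal sketch of the merge/average idea in C, operating on an already-decoded 16x16 RGB block (the `rgb_t` type, `SRC_N`, and the function name are placeholders, not from any particular JPEG decoder). Each source pixel is binned into the destination cell it maps to and each cell is averaged at the end, which also handles sizes like 3x3 that don't divide 16 evenly:

```c
#include <stdint.h>

#define SRC_N   16u   /* decoded block is SRC_N x SRC_N pixels */
#define DST_MAX 16u   /* dst_n must be 1..DST_MAX */

typedef struct { uint8_t r, g, b; } rgb_t;

/* Average one decoded SRC_N x SRC_N block down to dst_n x dst_n by
 * binning every source pixel into the destination cell it maps to,
 * then dividing each cell by the number of pixels it received. */
void shrink_block(const rgb_t src[SRC_N][SRC_N],
                  rgb_t dst[DST_MAX][DST_MAX],
                  unsigned dst_n)
{
    uint32_t sum_r[DST_MAX][DST_MAX] = {0};
    uint32_t sum_g[DST_MAX][DST_MAX] = {0};
    uint32_t sum_b[DST_MAX][DST_MAX] = {0};
    uint16_t cnt[DST_MAX][DST_MAX]   = {0};

    for (unsigned y = 0; y < SRC_N; ++y) {
        for (unsigned x = 0; x < SRC_N; ++x) {
            unsigned dy = (y * dst_n) / SRC_N;   /* destination row    */
            unsigned dx = (x * dst_n) / SRC_N;   /* destination column */
            sum_r[dy][dx] += src[y][x].r;
            sum_g[dy][dx] += src[y][x].g;
            sum_b[dy][dx] += src[y][x].b;
            cnt[dy][dx]++;
        }
    }

    for (unsigned dy = 0; dy < dst_n; ++dy) {
        for (unsigned dx = 0; dx < dst_n; ++dx) {
            uint16_t n = cnt[dy][dx];
            /* round to nearest by adding n/2 before dividing */
            dst[dy][dx].r = (uint8_t)((sum_r[dy][dx] + n / 2) / n);
            dst[dy][dx].g = (uint8_t)((sum_g[dy][dx] + n / 2) / n);
            dst[dy][dx].b = (uint8_t)((sum_b[dy][dx] + n / 2) / n);
        }
    }
}
```

The accumulator arrays here take a few KB of stack; on a RAM-constrained Cortex-M4 they could instead be sized to dst_n x dst_n or made static.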

  • There are some good answers about image downscaling algorithms here: [Image downscaling algorithm](https://stackoverflow.com/a/9570971/11542834) – Dash Nov 26 '22 at 02:51
  • I'm reading the JPEG standard, and it's not clear to me that it's legal to reduce the size of an MCU like that. It seems like it only supports 8x8 MCUs, except when using chroma subsampling, where it uses 16x16 MCUs. Do you have enough memory to hold an entire row of MCUs at once? – Nick ODell Nov 26 '22 at 03:06
  • To be clear, I’m attempting to reduce the size of the MCU after it has already been decoded. So maybe technically it wouldn’t be right to consider it an “MCU” at this point; I would be operating on a 16x16 or 8x8 matrix of color values. I can’t see where the issue might come from. I may have enough memory for a row of MCUs. – roymoran Nov 26 '22 at 03:49

0 Answers