I’m writing software for a Cortex-M4 microcontroller and I’d like to resize a JPEG image. Take, as an example, a JPEG whose MCUs are 16x16 (256 pixels). Is it reasonable to reduce each MCU down to an arbitrarily smaller block, say from 16x16 to 1x1, 2x2, or 3x3? My thinking is that I could merge/average the color information among the 256 pixels until I’ve reduced the MCU to the desired size. Is this a reasonable way of resizing the image, or is it naive?
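To make the idea concrete, here’s a rough sketch of the averaging I have in mind, for a single 8-bit channel (assume the MCU has already been decoded to raw pixel samples; `shrink_mcu` and its signature are just my own illustrative names, not any library API):

```c
#include <stdint.h>

/* Box-average one decoded 16x16 MCU channel down to out_n x out_n
 * (1 <= out_n <= 16). For out_n values that don't divide 16 evenly
 * (e.g. 3), each output pixel averages a slightly different number
 * of source pixels, computed from integer range boundaries. */
static void shrink_mcu(const uint8_t src[16][16], uint8_t *dst, int out_n)
{
    for (int oy = 0; oy < out_n; oy++) {
        /* source rows covered by output row oy */
        int y0 = (oy * 16) / out_n, y1 = ((oy + 1) * 16) / out_n;
        for (int ox = 0; ox < out_n; ox++) {
            /* source columns covered by output column ox */
            int x0 = (ox * 16) / out_n, x1 = ((ox + 1) * 16) / out_n;
            uint32_t sum = 0;
            for (int y = y0; y < y1; y++)
                for (int x = x0; x < x1; x++)
                    sum += src[y][x];
            uint32_t count = (uint32_t)((y1 - y0) * (x1 - x0));
            /* rounded average of the covered source pixels */
            dst[oy * out_n + ox] = (uint8_t)((sum + count / 2) / count);
        }
    }
}
```

The same routine would run once per channel (Y, Cb, Cr, or R/G/B after conversion), and the integer arithmetic keeps it cheap enough for a Cortex-M4 with no FPU dependency.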
I’ve started thinking through this problem and would appreciate advice on my approach.