Preface for the lazy:
The final (working) algorithm can be found in the last source code listing.
I also documented the intermediate steps, because the questioner stated they are "not a programmer".
(Besides, I wouldn't feel able to explain the final code without those steps...)
Introduction
Some years ago, I found an article in the German computer journal c't about up-scaling and down-scaling of RGB images. These algorithms became part of my personal library, and I used them from time to time, e.g. for adjusting the size of images in our software - mostly to prepare OpenGL textures properly.
The basic idea of that article was to consider the spatial fraction with which a source pixel (imagined as a square) covers a destination pixel (or vice versa). Hence, the author distinguished between up-scaling and down-scaling. Partly covered pixels were handled using float values.
When reading the question, I noticed two special constraints:
- the requirement to deal with bitmaps (due to the monochrome LCD output)
- the ratio of source to destination width and height is 75/16.
The ratio 75/16 means that 75×75 source pixels map to 16×16 destination pixels, i.e. 4.6875×4.6875 source pixels map to one destination pixel. Therefore, there are pixels in the source image which map partly to two or even four neighbouring destination pixels.
Considering these special requirements, I got the idea that in this special case it should be possible to do the scaling with integer arithmetic only. (According to your hint that the destination platform is an embedded CPU, this should be welcome, as such CPUs usually don't provide native floating point instructions.)
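To illustrate the idea with the actual numbers before diving into code (this little stand-alone program is just my own illustration): each destination pixel covers 75/16 = 4.6875 source pixels. If every fully covered source pixel gets the weight 16 and the two cut pixels share the remainder, the weights of every destination pixel sum up to exactly 75 - so all computations stay in integers.

#include <iostream>
// Stand-alone sketch: print the integer weights each destination pixel assigns
// to its source pixels for the ratio 75/16. Every line of weights sums to 75.
int main()
{
    const int nR = 75, dR = 16; // source : destination size ratio
    int n = 0;                  // running remainder, counted in 1/16 pixel units
    for (int xDst = 0; xDst < dR; ++xDst) {
        int sum = 0;
        std::cout << "dst " << xDst << ":";
        if (n) { std::cout << ' ' << dR - n; sum += dR - n; n -= dR; } // right part of cut pixel
        n += nR;
        for (; n >= dR; n -= dR) { std::cout << ' ' << dR; sum += dR; } // fully covered pixels
        if (n) { std::cout << ' ' << n; sum += n; }                     // left part of next cut pixel
        std::cout << "  (sum: " << sum << ")" << std::endl;
    }
    return 0;
}

For example, destination pixel 0 gets the weights 16 16 16 16 11 and destination pixel 1 gets 5 16 16 16 16 6 - both sum up to 75.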
Mastering 1D
As a warm-up, I started with
- bytes instead of bits
- the down-scaling of a single image line.
The idea is to accumulate source pixel values into a gray level in the range [0, 75], which is then binarized again using a binary threshold.
#include <iostream>
// convenience type for a byte
typedef unsigned char uint8;
// ratio of source image size and destination image size
enum { nR = 75, dR = 16 };
// source image size
enum { wSrc = 1 * nR };
// destination image size
enum { wDst = dR * wSrc / nR };
// binary threshold
enum { tBin = nR / 2 };
// source image
static uint8 imgSrc[wSrc] = {
1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, // 0 ... 15
1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, // 16 ... 31
1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, // 32 ... 47
1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, // 48 ... 63
1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0 // 64 ... 74
};
// destination image
static uint8 imgDst[wDst];
// returns a source pixel.
inline int getPixel(int x) { return imgSrc[x]; }
// stores a destination pixel
inline void setPixel(int x, int value)
{
imgDst[x] = !!value; // forces destination value to 0 or 1
}
// prints an image.
void printImg(
int w, // width of image
const uint8 *img) // the image data
{
for (int x = 0; x < w; ++x) std::cout << (char)('0' + img[x]);
std::cout << std::endl;
}
// main function.
int main()
{
// print source image for visual check
std::cout << "Source image (" << wSrc << "):" << std::endl;
printImg(wSrc, imgSrc);
// scale x
int xSrc = 0; int n = 0;
for (int xDst = 0; xDst < wDst; ++xDst) {
int value = 0; // destination pixel accumulator
// process right of cut pixel
if (n) { value += (dR - n) * getPixel(xSrc); ++xSrc; n -= dR; }
n += nR;
// process full pixels
for (; n >= dR; ++xSrc, n -= dR) value += dR * getPixel(xSrc);
// process left of cut pixel
if (n) value += n * getPixel(xSrc);
// store value: 0 ... tBin - 1 -> 0, tBin ... nR -> 1
setPixel(xDst, value >= tBin);
}
// print destination image for visual check
std::cout << "Destination image (" << wDst << "):"
<< std::endl;
printImg(wDst, imgDst);
// done
return 0;
}
I compiled and tested this in Visual Studio 2013 and got the following output:
Source image (75):
111111110000000011111111000000001111111100000000111111110000000011111111000
Destination image (16):
1101100110110010
Remembering that roughly 5 source pixels map to 1 destination pixel, the output looks quite reasonable to me.
Extension to 2D
The next step was to extend the first sample to two-dimensional images. I soon realized that my accumulation approach had to be extended to a full destination image row. This was achieved by using a values array instead of a single value. Following my first approach, source image rows which are split between two destination rows have to be processed twice. To prevent code duplication, I introduced helper functions for this: accuPixel() and accuRow().
#include <cassert>
#include <iostream>
// convenience type for a byte
typedef unsigned char uint8;
// convenience type for an image
struct Image {
int w, h; // width and height of image
uint8 *data; // image data
int getPixel(int x, int y) const
{
assert(x >= 0 && x < w);
assert(y >= 0 && y < h);
return data[y * w + x]; // row-major layout (matches print())
}
void setPixel(int x, int y, int value)
{
assert(x >= 0 && x < w);
assert(y >= 0 && y < h);
data[y * w + x] = !!value; // '!!' forces dest. value to 0 or 1
}
void print() const
{
for (int y = 0; y < h; ++y) {
for (int x = 0; x < w; ++x) {
std::cout << (char)('0' + data[y * w + x]);
}
std::cout << std::endl;
}
}
};
// ratio of source image size and destination image size
enum { nR = 75, dR = 16 };
// source image size
enum { wSrc = 1 * nR, hSrc = 1 * nR };
// destination image size
enum { wDst = dR * wSrc / nR, hDst = dR * hSrc / nR };
// binary threshold
enum { tBin = nR * nR / 2 };
// source image
static uint8 dataSrc[wSrc * hSrc];
static Image imgSrc = {
/* int w, h: */ wSrc, hSrc,
/* uint8 *data: */ dataSrc
};
// destination image
static uint8 dataDst[wDst * hDst];
static Image imgDst = {
/* int w, h: */ wDst, hDst,
/* uint8 *data: */ dataDst
};
/* accumulates value for a destination pixel from the according number
* of source pixels in one source image row.
*/
void accuPixel(
int &value, // the accumulation value (updated)
const Image &imgSrc, // the source image
int &xSrc, // column index of source pixels (updated)
int ySrc, // row index of source pixels
int &n, // counter of accumulated values (updated)
int fY) // vertical weight of row
{
// process right part of cut pixel
if (n) {
value += fY * (dR - n) * imgSrc.getPixel(xSrc, ySrc);
++xSrc; n -= dR;
}
n += nR;
// process full pixels
for (; n >= dR; ++xSrc, n -= dR) {
value += fY * dR * imgSrc.getPixel(xSrc, ySrc);
}
// process left part of cut pixel
if (n) value += fY * n * imgSrc.getPixel(xSrc, ySrc);
}
/* accumulates values for one destination image row from one source
* image row.
*/
void accuRow(
int wDst, // width of destination image
int *values, // accumulation values for destination row
const Image &imgSrc, // the source image
int ySrc, // row index of source pixels
int fY) // vertical weight of row
{
for (int xSrc = 0, n = 0, xDst = 0; xDst < wDst; ++xDst) {
accuPixel(values[xDst], imgSrc, xSrc, ySrc, n, fY);
}
}
// main function
int main()
{
// fill source image with a chess board pattern
for (int y = 0; y < hSrc; ++y) {
for (int x = 0; x < wSrc; ++x) {
imgSrc.setPixel(x, y, (x % 16 < 8) == (y % 16 < 8));
}
}
// print source image for visual check
std::cout << "Source image (" << wSrc << 'x' << hSrc << "):"
<< std::endl;
imgSrc.print();
// scale source image to destination image
int ySrc = 0; int m = 0;
for (int yDst = 0; yDst < hDst; ++yDst) {
int values[wDst];
for (int &value : values) value = 0; // init accu values
// process bottom of cut row
if (m) {
accuRow(imgDst.w, values, imgSrc, ySrc, dR - m);
++ySrc; m -= dR;
}
m += nR;
// process full rows
for (; m >= dR; ++ySrc, m -= dR) {
accuRow(imgDst.w, values, imgSrc, ySrc, dR);
}
// process top of cut row
if (m) accuRow(imgDst.w, values, imgSrc, ySrc, m);
// process accumulated values
for (int xDst = 0; xDst < wDst; ++xDst) {
imgDst.setPixel(xDst, yDst, values[xDst] >= tBin);
}
}
// print destination image for visual check
std::cout << "Destination image (" << wDst << 'x' << hDst << "):"
<< std::endl;
imgDst.print();
// done
return 0;
}
The output of the program is (like the input) a checker board. However, the output checker board does not have equally sized cells, due to the interpolation and the subsequent binarization.
The Actual Down-Scaling of Bit Maps
Once the scaling did what I expected, the sample code got its finishing touches:
The Image class was modified to support bitmaps. This would have been easy if I had used std::vector<bool> (the specialized version of std::vector<>), which packs its values as bits, and it probably would have simplified parts of the code. I decided against std::vector<bool> because I'm uncertain how the data is provided in the OP. I believe my "explicit" C++ sample code is easier to adapt to the existing data model on the questioner's platform.
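Just to show the alternative I decided against - a minimal sketch (the class name BoolImage is mine), which packs the pixels via std::vector<bool> instead of an explicit byte buffer:

#include <vector>
// Minimal bit image based on std::vector<bool>; the container packs its
// elements as bits internally, so no manual bit fiddling is needed.
struct BoolImage {
    int w, h;               // image size
    std::vector<bool> data; // one element per pixel

    BoolImage(int w_, int h_): w(w_), h(h_), data(w_ * h_) { }
    int getPixel(int x, int y) const { return data[y * w + x]; }
    void setPixel(int x, int y, int value) { data[y * w + x] = !!value; }
};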
I added file I/O to make the sample more flexible. I'm not sure about the image format in the OP. My first thought was that XMP was simply a typo for XPM. But then I became suspicious, googled a bit, and found that XMP actually exists. Could that be what is meant? If I understood it correctly, XMP is a standard for metadata which can be embedded in certain image formats like JPEG and TIFF. So, I'm still uncertain...
To work around this, I decided to use a file format for which loading and saving need only a few lines of code: PBM.
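For reference, a binary PBM file ("P4") consists of a short ASCII header followed by the packed pixel rows - for a 75×75 image the file starts like this:

P4
# an optional comment line
75 75

followed by the raw pixel data: one bit per pixel, each row padded to a whole byte, leftmost pixel stored in the most significant bit.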
Once I had implemented the PBM I/O, I stumbled over two issues which IMHO are worth noting:
1. If the length of an image row is not a multiple of 8: are the rows byte-aligned or not? To cover both cases, I added the _bPR ("bits per row") member to my Image class. In the case of PBM, the rows are byte-aligned. (I converted the Wikipedia 'J' sample image from the ASCII to the RAW version with GIMP to check this.)
2. The first working version (i.e. the one which didn't crash) produced an output image which looked not completely wrong but somehow "wrong in stripes". I concluded that I had stored the bits within a byte in the wrong order. (Of the two possible orders, I had initially chosen the wrong one.) The correct way is that the leftmost pixel of a byte is stored in its most significant bit. (For the opposite order, the bit shifting in Image::getPixel() and Image::setPixel() would have to be changed. I left the versions that are wrong for PBM in as disabled code, just in case.)
The final sample code:
#include <cassert>
#include <cstdlib> // for strtol()
#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
// convenience type for bytes
typedef unsigned char uint8;
// image helper class
class Image {
private: // variables:
int _w, _h; // image size
int _bPR; // bits per row
uint8 *_data; // image data
public: // methods:
// constructor.
Image(): _w(0), _h(0), _bPR(0), _data(nullptr) { }
// destructor.
~Image() { free(); }
// returns width of image.
int w() const { return _w; }
// returns height of image.
int h() const { return _h; }
// returns data.
const uint8* data() const { return _data; }
// returns data size (in bytes).
size_t size() const { return (_h * _bPR + 7) / 8; }
// clears image.
void free()
{
delete[] _data; _data = 0; _w = _h = _bPR = 0;
}
// allocates image data.
uint8* alloc( // returns allocated buffer or 0 in case of error
int w, // image width
int h, // image height
int bPR) // bits per row
{
assert(w >= 0 && w <= bPR);
assert(h >= 0);
free();
size_t size = (h * bPR + 7) / 8;
if (size && (_data = new uint8[size])) {
_w = w; _h = h; _bPR = bPR;
}
return _data;
}
// returns pixel.
int getPixel(
int x, // column
int y) // row
const {
assert(x >= 0 && x < _w);
assert(y >= 0 && y < _h);
#if 0 // wrong for PBM
int b = y * _bPR + x, bit = b % 8; // most left pixel is LSB
#else // correct for PBM
int b = y * _bPR + x, bit = 7 - b % 8; // most left pixel is MSB
#endif // 0
return _data[b / 8] >> bit & 1;
}
// sets pixel.
void setPixel(
int x, // column
int y, // row
int value) // value (should be 0 or 1)
{
assert(x >= 0 && x < _w);
assert(y >= 0 && y < _h);
int b = y * _bPR + x;
#if 0 // wrong for PBM
uint8 *pB = _data + b / 8, bit = b % 8; // most left pixel is LSB
#else // correct for PBM
uint8 *pB = _data + b / 8, bit = 7 - b % 8; // most left pixel is MSB
#endif // 0
*pB &= (uint8)~(1 << bit); *pB |= !!value << bit; // bit fiddling
}
};
// reads a PBM binary file.
void readPBM(
std::istream &in, // input stream (to read from)
Image &img) // image to store read data into
{
std::string buffer;
std::getline(in, buffer);
if (buffer != "P4") {
throw "ERROR! File is not a PBM binary file.";
}
do {
std::getline(in, buffer);
} while (buffer[0] == '#');
std::istringstream sIn(buffer);
int w = 0, h = 0;
sIn >> w >> h;
// PBM stores rows aligned to bytes
int bitsPerRow = (w + 7) & ~0x7;
// allocate data memory
char *data = (char*)img.alloc(w, h, bitsPerRow);
// read rest of file at once
in.read(data, img.size());
}
// writes a PBM binary file.
void writePBM(
std::ostream &out, // output stream (to write to)
const Image &img) // image which shall be written
{
out << "P4" << std::endl
<< img.w() << ' ' << img.h() << std::endl;
out.write((const char*)img.data(), img.size());
}
// converts a text to an integer.
int strToI( // returns the integer or throws
const char *text) // text to convert
{
const char *end = text; int value = strtol(text, (char**)&end, 0);
if (end == text || *end != '\0') throw "Not a number.";
return value;
}
/* accumulates value for a destination pixel from the according number
* of source pixels in one source image row.
*/
void accuPixel(
int &value, // the accumulation value (updated)
const Image &imgSrc, // the source image
int &xSrc, // column index of source pixels (updated)
int ySrc, // row index of source pixels
int &n, // counter of accumulated values (updated)
int fY, // vertical weight of row
int nR, // numerator of ratio (source to destination image size)
int dR) // denominator of ratio (source to destination image size)
{
// process right part of cut pixel
if (n) {
value += fY * (dR - n) * imgSrc.getPixel(xSrc, ySrc);
++xSrc; n -= dR;
}
n += nR;
// process full pixels
for (; n >= dR; ++xSrc, n -= dR) {
value += fY * dR * imgSrc.getPixel(xSrc, ySrc);
}
// process left part of cut pixel
if (n) value += fY * n * imgSrc.getPixel(xSrc, ySrc);
}
/* accumulates values for one destination image row from one source
* image row.
*/
void accuRow(
int wDst, // width of destination image
int *values, // accumulation values for destination row
const Image &imgSrc, // the source image
int ySrc, // row index of source pixels
int fY, // vertical weight of row
int nR, // numerator of ratio (source to destination image size)
int dR) // denominator of ratio (source to destination image size)
{
for (int xSrc = 0, n = 0, xDst = 0; xDst < wDst; ++xDst) {
accuPixel(values[xDst], imgSrc, xSrc, ySrc, n, fY, nR, dR);
}
}
// scales source image to destination image.
void scale(
const Image &imgSrc, // source image
Image &imgDst, // destination image
int nR, // numerator of ratio (source to destination image size)
int dR, // denominator of ratio (source to destination image size)
int tBin) // binary threshold e.g. nR * nR / 2
{
// allocate space for destination image
const int wDst = dR * imgSrc.w() / nR;
const int hDst = dR * imgSrc.h() / nR;
if (!imgDst.alloc(wDst, hDst, (wDst + 7) & ~7)) {
throw "ERROR! Allocation of destination image failed!";
}
int *values = new int[wDst]; // aux. buffer to accumulate values
for (int ySrc = 0, m = 0, yDst = 0; yDst < hDst; ++yDst) {
// init accu values
for (int i = 0; i < wDst; ++i) values[i] = 0;
// process bottom of cut row
if (m) {
accuRow(wDst, values, imgSrc, ySrc, dR - m, nR, dR);
++ySrc; m -= dR;
}
m += nR;
// process full rows
for (; m >= dR; ++ySrc, m -= dR) {
accuRow(wDst, values, imgSrc, ySrc, dR, nR, dR);
}
// process top of cut row
if (m) accuRow(wDst, values, imgSrc, ySrc, m, nR, dR);
// process accumulated values
for (int xDst = 0; xDst < wDst; ++xDst) {
imgDst.setPixel(xDst, yDst, values[xDst] > tBin);
}
}
delete[] values; // free aux. buffer
}
// main function
int main( // returns 0 on success and another value in error case
int argc, // number of command line arguments
char **argv) // command line arguments
{
// check for sufficient number of arguments
if (argc <= 4) {
std::cerr << "ERROR! Missing command line arguments." << std::endl;
std::cout
<< "Usage:" << std::endl
<< argv[0] << " INFILE OUTFILE NR DR" << std::endl
<< "where" << std::endl
<< "INFILE ... file name of PBM input file (must exist)" << std::endl
<< "OUTFILE ... file name of PBM output file (overwritten if existing)" << std::endl
<< "NR ... numerator of ratio (src. to dest. image size)" << std::endl
<< "DR ... denominator of ratio (src. to dest. image size)" << std::endl
<< "NR and DR must be (not too large) positive integers: 0 < DR < NR" << std::endl;
return 1; // ERROR!
}
try {
// read command line arguments
const char *fileIn = argv[1];
const char *fileOut = argv[2];
int nR;
try {
nR = strToI(argv[3]);
} catch (const char*) {
throw "ERROR in $3! (Not a number.)";
}
int dR;
try {
dR = strToI(argv[4]);
} catch (const char*) {
throw "ERROR in $4! (Not a number.)";
}
int tBin = nR * nR / 2; // might become cmd. line arg. also
// read input file
Image imgSrc;
std::ifstream fIn(fileIn, std::ios::in | std::ios::binary);
fIn.exceptions(std::ifstream::badbit);
readPBM(fIn, imgSrc);
// scale source image to destination image
Image imgDst;
scale(imgSrc, imgDst, nR, dR, tBin);
// write output file
std::ofstream fOut(fileOut, std::ios::out | std::ios::binary);
fOut.exceptions(std::ofstream::badbit);
writePBM(fOut, imgDst);
} catch (const char *error) {
std::cerr << error << std::endl;
return 1; // ERROR!
} catch (const std::exception &error) {
std::cerr << error.what() << std::endl;
return 1; // ERROR!
}
// done (probably successfully)
return 0;
}
To test the sample code, I prepared one of my photos as a sample image. The original photo shows my cat Moritz playing with a screw:

I GIMPed it a little bit to get an appropriate sample image (mainly because GIMP can write, load, and display PBM files):

Although I did all development and testing in Visual Studio 2013, the following sample session was done with g++ (in Cygwin on Windows 10, 64 bit):
$ g++ --version
g++ (GCC) 5.4.0
$ g++ -std=c++11 -o scale-bitmap scale-bitmap.cc
$ ./scale-bitmap cat.bin.pbm out.bin.pbm 75 16
$
This produced the following output:

If I'm not mistaken, the sample code just implements a simple box filter (weighted area averaging) - probably the second most basic approach after simply removing rows and columns from the source image.
As the sample output illustrates, the quality of the output is rather limited. Better results might be achieved with more sophisticated processing:
- Better interpolation may help. The Wikipedia articles Image scaling and Pixel art scaling algorithms are probably a good starting point.
- Especially for monochrome images, dithering could be an option.
All these nice things will require significantly more development effort and code (unless taken from a library).
However, a small improvement might already be achieved by changing the binary threshold tBin (see the sketch below). I didn't try it, but I can imagine that it helps, because I played with the binary threshold in GIMP while preparing the test image...
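For example, tBin could be made an optional fifth command line argument. This is a small, untested sketch of how the existing argument parsing in main() could be extended (the argc check and the error text are my additions):

// replaces the line 'int tBin = nR * nR / 2;' in main():
int tBin = nR * nR / 2;         // default as before
if (argc > 5) {
    try {
        tBin = strToI(argv[5]); // shifts the black/white balance of the result
    } catch (const char*) {
        throw "ERROR in $5! (Not a number.)";
    }
}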
Last but not least
While writing this answer, I also found a similar question: SO: Image downscaling algorithm ...and noticed only after posting that the questioner had already mentioned it...
If I had separated nR and dR for horizontal and vertical scaling, the algorithm could also be applied to non-uniform scaling (different ratios in x and y). It shouldn't be too hard to change this, but it wasn't required in the OP; a sketch of how it could look follows below.
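This is roughly how the signature and the vertical loop could change (a hypothetical, untested variant; it reuses Image and accuRow() from the final listing, and the name scaleXY is mine):

// Hypothetical variant of scale() with separate ratios per axis:
// nRx/dRx horizontally, nRy/dRy vertically.
void scaleXY(
    const Image &imgSrc, Image &imgDst,
    int nRx, int dRx, // horizontal ratio (source to destination width)
    int nRy, int dRy, // vertical ratio (source to destination height)
    int tBin)         // binary threshold, e.g. nRx * nRy / 2
{
    const int wDst = dRx * imgSrc.w() / nRx;
    const int hDst = dRy * imgSrc.h() / nRy;
    if (!imgDst.alloc(wDst, hDst, (wDst + 7) & ~7)) {
        throw "ERROR! Allocation of destination image failed!";
    }
    int *values = new int[wDst]; // aux. buffer to accumulate values
    for (int ySrc = 0, m = 0, yDst = 0; yDst < hDst; ++yDst) {
        for (int i = 0; i < wDst; ++i) values[i] = 0;
        // bottom of cut row - vertical remainder counted in 1/dRy units
        if (m) {
            accuRow(wDst, values, imgSrc, ySrc, dRy - m, nRx, dRx);
            ++ySrc; m -= dRy;
        }
        m += nRy;
        // full rows
        for (; m >= dRy; ++ySrc, m -= dRy) {
            accuRow(wDst, values, imgSrc, ySrc, dRy, nRx, dRx);
        }
        // top of next cut row
        if (m) accuRow(wDst, values, imgSrc, ySrc, m, nRx, dRx);
        // binarize: the maximum accumulated value is now nRx * nRy
        for (int xDst = 0; xDst < wDst; ++xDst) {
            imgDst.setPixel(xDst, yDst, values[xDst] >= tBin);
        }
    }
    delete[] values; // free aux. buffer
}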
Finally, I thought about the limitations of the target platform. Given the source and destination image dimensions described in the OP, the highest possible accumulated value is 75 * 75 = 5625 (reached when all contributing source pixels are 1, i.e. a completely white (or black?) area). This is good news: even if the C/C++ compiler for the Atmel ATmega provides only 16-bit int, the sample code should work without harm.
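If someone reuses the code with (much) larger ratios on such a 16-bit platform, a simple guard after parsing the arguments would catch the overflow case. This is my own addition, not part of the listing above (INT_MAX requires <climits>):

#include <climits>
// ... in main(), after nR has been parsed:
if (nR > 0 && nR > INT_MAX / nR) {
    throw "ERROR! NR too large - the accumulated value nR * nR would overflow int.";
}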