There are a lot of weaknesses in the question – too many to sort out in comments...
This question is tagged C++. Why use the error-prone fprintf()? Why not std::fstream? It has similar capabilities (if not more) and adds type safety, which the printf() family cannot provide.
The counterpart of fprintf() is fscanf(). The format specifiers are similar, but the storage type has to be matched in the specifiers even more carefully than with fprintf().
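To illustrate the difference, a minimal sketch (the function names are made up for illustration; like OP's code, it assumes a file of whitespace-separated hex values):

#include <cstdio>
#include <fstream>

// fscanf(): the format specifier must match the storage type exactly;
// "%hhx" is required for unsigned char. A plain "%x" would store an
// unsigned int and corrupt adjacent memory.
bool readByteC(FILE *datafile, unsigned char &byte)
{
    return fscanf(datafile, "%hhx", &byte) == 1;
}

// std::istream: extraction deduces the type from its argument, so
// there is no format string to get wrong. (operator>> would read an
// unsigned char as a single character, hence the int detour.)
bool readByteCpp(std::istream &in, unsigned char &byte)
{
    int value;
    if (!(in >> std::hex >> value)) return false;
    byte = static_cast<unsigned char>(value);
    return true;
}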
If the first code sample is the attempt to read pixels back from pixeldata.x... why datafile = fopen("pixeldata.x", "w");? To open a file with fopen() for reading, the mode should be "r".
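So the open for reading should be:

FILE *datafile = fopen("pixeldata.x", "r");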
char Image2[14] = "image_out.bmp"; is correct (if I counted correctly) but maintenance-unfriendly. Let the compiler do the counting for you:
char Image2[] = "image_out.bmp";
To provide storage for pixel data with (in OP's case) a fixed size of 512 × 512 bytes, the simplest option would be:
unsigned char pixeldata[512 * 512];
Storing an array of that size (512 × 512 = 262144 bytes = 256 KByte) in a local variable might be seen as a potential issue by some people. The alternative would be to use a std::vector<unsigned char> instead. (std::vector allocates its storage dynamically in heap memory, whereas local variables usually live in a kind of stack memory, which in turn is usually of limited size.)
Concerning the std::vector<unsigned char> pixeldata;, I see two options:
definition with pre-allocation:
std::vector<unsigned char> pixeldata(512 * 512);
so that it can be used just like the array above.
definition without pre-allocation:
std::vector<unsigned char> pixeldata;
That allows appending each pixel to the end as it is read, with std::vector::push_back() (see the sketch after this list).
It may be worth reserving the final size beforehand, as it is known from the beginning:
std::vector<unsigned char> pixeldata;
pixeldata.reserve(512 * 512); // size reserved but not yet used
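Continuing that, a sketch of the read loop with push_back() (assuming the same datafile and hex format as in the complete code below):

unsigned char byte;
while (pixeldata.size() < 512u * 512u // stop at the expected size
       && fscanf(datafile, "%hhx", &byte) == 1)
    pixeldata.push_back(byte); // append each pixel as it is read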
So, this is how it could finally look:
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <vector>
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"
int main()
{
    const int w = 512, h = 512;
    // read data
    FILE *datafile = fopen("pixeldata.x", "r");
    if (!datafile) { // success of file open should be tested ALWAYS
        std::cerr << "Cannot open 'pixeldata.x'!\n";
        return -1; // ERROR! (bail out)
    }
    typedef unsigned char uchar; // for convenience
    std::vector<uchar> pixeldata(w * h);
    char Image2[] = "image_out.bmp";
    for (int i = 0, n = w * h; i < n; ++i) {
        if (fscanf(datafile, "%hhx", &pixeldata[i]) < 1) {
            std::cerr << "Failed to read value " << i << " of 'pixeldata.x'!\n";
            return -1; // ERROR! (bail out)
        }
    }
    fclose(datafile);
    // write BMP image
    stbi_write_bmp(Image2, w, h, 1, pixeldata.data());
    // Actually, success of this should be tested as well.
    // done
    return 0;
}
Some additional notes:
Please take this code with a grain of salt. I haven't compiled or tested it. (I leave this as a task for OP but will react to "bug reports".)
I silently removed using namespace std;: SO: Why is “using namespace std” considered bad practice?
I added checks for the success of file operations. File operations can fail for a lot of reasons. For file writing, even the fclose() should be tested: written data might be cached until the file is closed, and flushing the cached data on close might still fail (e.g. because just this overflows the available volume space).
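A sketch of that for a hypothetical output file (the name outfile.x is made up for illustration):

FILE *outfile = fopen("outfile.x", "w");
if (!outfile) { /* bail out */ }
// ... fprintf()/fwrite() calls, each with its return value checked ...
if (fclose(outfile) != 0) { // fclose() returns 0 on success, EOF on failure
    std::cerr << "Writing 'outfile.x' failed on close!\n";
    return -1;
}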
OP used magic numbers (image width and height), which is considered bad practice. It makes the code maintenance-unfriendly and might make it harder to understand for other readers: SO: What is a magic number, and why is it bad?
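That is what the named constants w and h in the complete code above are for, e.g.:

const int w = 512, h = 512; // named once, used everywhere
std::vector<unsigned char> pixeldata(w * h); // instead of pixeldata(262144)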