
I have a problem saving a lot of data to a binary file in C++; I'm using Code::Blocks.

My code:

#include <cstdio>
#include <cstdlib>
#include <ctime>

long tablica[10000];

int main(int argc, char *argv[])
{
    FILE *pFile;
    clock_t start, stop;
    srand(time(NULL));
    long i;
    for (i = 0; i < 10000; i++) {
        tablica[i] = rand() % 100000;
    }
    start = clock();
    pFile = fopen("myfile.bin", "wb");
    if (pFile != NULL) {
        long test = fwrite(tablica,sizeof(long),sizeof(tablica),pFile);
        int err = ferror(pFile);
        fclose(pFile);
        printf("%d", err);
    }
    stop = clock();
    double wynik;
    wynik = double(stop - start);
    printf("\n %f", wynik);
    return 0;
}

It just generates a lot of random data and saves it to a file.

The strange thing is that when I compile and run it from the DEBUG profile it runs well, but when running from Release it throws error number 32, Broken Pipe.

I went to the build options, and it seems that "Produce debugging symbols" makes the difference: when it's on, the file write is successful, and if it's off I get Broken Pipe.

Can someone tell me why this problem occurs and how to fix it? I need my app to be quite fast, and I guess that "Produce debugging symbols" will slow it down.

Eaxxx
  • Might not be related, but you are writing too much data. `fwrite(tablica,sizeof(long),sizeof(tablica),pFile);` says write `sizeof(long)` (probably 4 or 8 bytes) `sizeof(tablica)` (10000 * 4 or 8) times. – user4581301 Jun 26 '15 at 19:34
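
To illustrate the arithmetic in that comment, a minimal sketch (it assumes a 4-byte `long`, typical for MinGW on Windows; the numbers change with an 8-byte `long`):

#include <cstdio>

long tablica[10000];

int main() {
    // fwrite(ptr, size, count, stream) writes size * count bytes.
    // The question passes size = sizeof(long) and count = sizeof(tablica),
    // i.e. 4 * 40000 = 160000 bytes, four times the size of the array.
    printf("sizeof(long)    = %u\n", (unsigned) sizeof(long));
    printf("sizeof(tablica) = %u\n", (unsigned) sizeof(tablica));  // bytes, not elements
    printf("element count   = %u\n", (unsigned) (sizeof(tablica) / sizeof(tablica[0])));
    return 0;
}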

1 Answer


The problem is here:

long test = fwrite(tablica,sizeof(long),sizeof(tablica),pFile);

Calling sizeof on an array returns the size of the array in bytes, not the number of elements, as explained here: How do I determine the size of my array in C?

So (with a 4-byte `long`) you're telling fwrite to write 40000 longs, not 10000.

The reason it works in debug mode but not release is probably due to checks in the debug runtime. (Though you don't specify what platform/compiler/runtime you're using, so it's hard to say for sure.)

I would fix it like so:

long test = fwrite(tablica,1,sizeof(tablica),pFile);

And you really should be checking that `test` return value.
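
For instance, a fragment of how that check might look (using the byte-count form of the call above):

size_t written = fwrite(tablica, 1, sizeof(tablica), pFile);
if (written != sizeof(tablica)) {
    perror("fwrite");   /* short write: report and bail out */
}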

Also, instead of trying to write all the data in one shot, I would make a series of fwrite calls in a loop, writing, say, a kilobyte (1024 bytes) at a time.
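
A sketch of that loop (the 1024-byte chunk size is illustrative; error handling as above):

const char *bytes = (const char *) tablica;   /* treat the array as raw bytes */
size_t total = sizeof(tablica);
size_t chunk = 1024;
size_t done  = 0;
while (done < total) {
    size_t n = total - done < chunk ? total - done : chunk;
    if (fwrite(bytes + done, 1, n, pFile) != n) {
        perror("fwrite");                     /* short write: stop */
        break;
    }
    done += n;
}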

Dan Korn
  • Me, I'd flip that around `fwrite(tablica,sizeof(tablica), 1, pFile);` and test that the return value was 1. – user4581301 Jun 26 '15 at 19:38
  • Yeah, either way, although I think knowing how many bytes were written is more useful. Again, you don't necessarily want to ask the runtime to try to gulp the whole thing down in one giant bite, so to speak. – Dan Korn Jun 26 '15 at 19:40
  • Agreed. If the target is a PC, I'd go for 4k per write to line up with the most common cluster size. – user4581301 Jun 26 '15 at 19:41
  • Well, why I thought the 3rd parameter of fwrite was the total size of the array and not the element count, I can't tell. And yes, I will do all the checking; it's still a work in progress. Thanks for the help. – Eaxxx Jun 26 '15 at 19:44
  • Yet another alternative is `fwrite(tablica,sizeof(long),sizeof(tablica)/sizeof(long),pFile)`, or, in case you want to change the implementation of `tablica`, use `fwrite(tablica,sizeof(tablica[0]),sizeof(tablica)/sizeof(tablica[0]),pFile)`. – David Hammen Jun 26 '15 at 19:44
  • Definitely favour @DavidHammen's second version. It costs nothing and saves future debugging time. – user4581301 Jun 26 '15 at 19:54
  • I wonder ... now I will have to read this file; should I do it in 4k chunks as you suggested? And is it wise to determine the array count as filesize / sizeof(long) when I read it? – Eaxxx Jun 26 '15 at 20:09
  • Well, since this is tagged C++, then instead of messing around with low-level C arrays, an even better alternative might be to save the data in a `std::vector` of `long`, or of `char`, and use its `size()` function. And maybe use a C++ method of writing to the file, such as `std::ofstream`, instead of `fwrite`. – Dan Korn Jun 26 '15 at 20:58
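
Following up on that last comment, a minimal sketch of the `std::vector`/`std::ofstream` approach; it also answers the earlier read-back question by deriving the element count from the file size (variable names are illustrative):

#include <cstdlib>
#include <ctime>
#include <fstream>
#include <vector>

int main() {
    std::srand((unsigned) std::time(NULL));
    std::vector<long> tablica(10000);
    for (std::size_t i = 0; i < tablica.size(); ++i)
        tablica[i] = std::rand() % 100000;

    // Write the vector's contiguous storage as raw bytes.
    std::ofstream out("myfile.bin", std::ios::binary);
    out.write((const char *) &tablica[0], tablica.size() * sizeof(long));
    out.close();

    // Read it back, deriving the element count from the file size.
    std::ifstream in("myfile.bin", std::ios::binary | std::ios::ate);
    std::size_t bytes = (std::size_t) in.tellg();
    std::vector<long> wczytane(bytes / sizeof(long));
    in.seekg(0);
    in.read((char *) &wczytane[0], bytes);
    return 0;
}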