I have a dynamically allocated array of a struct with 17 million elements. To save it to disk, I write

fwrite(StructList, sizeof(Struct), NumStructs, FilePointer)

In a later step I read it back with an equivalent fread call, that is, using sizeof(Struct) as the size and NumStructs as the count. I expect the resulting file to be around 3.5 GB (this is all x64).
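For concreteness, here is a minimal, compilable sketch of that pattern; the Struct layout, element count, and file name below are placeholders, not my actual code.

```c
/* Minimal sketch of the write/read pattern described above. The Struct
   layout, element count, and file name are placeholders, not the real ones. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    double payload[32];              /* stand-in fields, 256 bytes per element */
} Struct;

int main(void)
{
    size_t NumStructs = 17000000;    /* roughly 17 million elements */
    Struct *StructList = calloc(NumStructs, sizeof(Struct));
    if (!StructList) return 1;

    /* write: size = sizeof(Struct), count = NumStructs */
    FILE *FilePointer = fopen("structs.bin", "wb");
    if (!FilePointer) { free(StructList); return 1; }
    size_t written = fwrite(StructList, sizeof(Struct), NumStructs, FilePointer);
    fclose(FilePointer);
    if (written != NumStructs)
        fprintf(stderr, "short write: %zu of %zu elements\n", written, NumStructs);

    /* later step: read it back with the same size and count arguments */
    FilePointer = fopen("structs.bin", "rb");
    if (!FilePointer) { free(StructList); return 1; }
    size_t got = fread(StructList, sizeof(Struct), NumStructs, FilePointer);
    fclose(FilePointer);
    if (got != NumStructs)
        fprintf(stderr, "short read: %zu of %zu elements\n", got, NumStructs);

    free(StructList);
    return 0;
}
```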
Is it possible instead to pass sizeof(Struct) * NumStructs as the size and 1 as the count to speed this up? I am scratching my head as to why the write operation could possibly take minutes on a fast computer with 32 GB of RAM (plenty of write cache). I've run home-brew benchmarks, and the cache is aggressive enough that 400 MB/sec for the first 800 MB to 1 GB is typical. PerfMon shows it consuming 100% of one core during the fwrite.
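For reference, a write-throughput test along those lines can be sketched as follows; the 1 GiB buffer, file name, and clock()-based timing here are illustrative choices, not my actual benchmark.

```c
/* Rough write-throughput test. The buffer size, file name, and use of
   clock() are illustrative choices only. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    size_t bytes = (size_t)1 << 30;          /* 1 GiB of zeroed data */
    char *buf = calloc(bytes, 1);
    if (!buf) return 1;

    FILE *fp = fopen("bench.bin", "wb");
    if (!fp) { free(buf); return 1; }

    clock_t t0 = clock();
    size_t written = fwrite(buf, 1, bytes, fp);
    fclose(fp);                              /* include the flush/close in the timing */
    clock_t t1 = clock();

    /* Note: clock() approximates wall time on Windows but CPU time on POSIX. */
    double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
    if (secs > 0)
        printf("%zu bytes in %.2f s (%.0f MB/s)\n",
               written, secs, (double)written / (1048576.0 * secs));

    free(buf);
    return 0;
}
```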
I saw the question here, so what I'm asking is whether there is some loop inside fwrite that can be "tricked" into going faster by telling it to write 1 element of size n*s as opposed to n elements of size s.
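To make the argument order explicit, the two call shapes being compared are below, wrapped in a pair of hypothetical helpers; only the size/count arguments differ.

```c
#include <stdio.h>

/* n elements of size s: the return value counts whole elements written */
size_t write_as_elements(const void *p, size_t s, size_t n, FILE *fp)
{
    return fwrite(p, s, n, fp);
}

/* 1 element of size n*s: the same bytes, but the return value is only 0 or 1 */
size_t write_as_one_blob(const void *p, size_t s, size_t n, FILE *fp)
{
    return fwrite(p, s * n, 1, fp);
}
```

A side effect of the second form is that the return value can only be 0 or 1, so a partial write is not distinguishable from writing nothing at all.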
EDIT
I ran this twice in release mode and both times I gave up waiting. Then I ran it in debug mode, knowing that fwrite typically takes much longer there. The exact size of the data to be written is 4,368,892,928 bytes. In all three cases, PerfMon shows two bursts of disk write activity about 30 seconds apart, after which the CPU goes to 100% of one core. At that point the file is 73,924,608 bytes. I have breakpoints on either side of the fwrite, so I know that's where it's sitting. It certainly seems that something is stuck, but I will leave it running overnight and see.
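One way to narrow this down further would be to issue the same write in fixed-size pieces with a progress printout, to see whether it stalls at the same offset; the sketch below is only a diagnostic idea (with an arbitrary 16 MB chunk size), not code I have run.

```c
#include <stdio.h>

/* Write total_bytes in fixed-size pieces, printing the running offset,
   so a stall shows up at a specific byte count. */
size_t write_in_chunks(const char *buf, size_t total_bytes, FILE *fp)
{
    const size_t chunk = (size_t)16 * 1024 * 1024;   /* arbitrary 16 MB pieces */
    size_t done = 0;
    while (done < total_bytes) {
        size_t n = total_bytes - done;
        if (n > chunk) n = chunk;
        if (fwrite(buf + done, 1, n, fp) != n)
            break;                                   /* stop on a short write */
        done += n;
        fprintf(stderr, "wrote %zu of %zu bytes\n", done, total_bytes);
    }
    return done;
}
```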
EDIT
Left this overnight and it definitely hung in fwrite; the file never went past 70 MB.