
I am measuring performance, and it seems Windows' CreateFile function call is actually not cheap and appears to depend on the file size: a 400–500 KB file takes about 0.3 ms, while a 100–200 KB file takes about 0.2 ms. Any idea why this is the case? I would have thought CreateFile would just open a handle to the file and wouldn't need to do any traversing. Any idea how to reduce this time? I cannot use fopen, since I am using the OVERLAPPED feature to do async I/O.

Thanks
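For reference, a minimal sketch of the kind of timed open the question describes (Win32; the file path and the use of QueryPerformanceCounter are illustrative assumptions, not from the original post):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    /* Open for async (overlapped) reads, as described in the question.
     * The path below is just an illustrative placeholder. */
    HANDLE h = CreateFileA("C:\\temp\\test.bin",
                           GENERIC_READ,
                           FILE_SHARE_READ,
                           NULL,
                           OPEN_EXISTING,
                           FILE_FLAG_OVERLAPPED,
                           NULL);
    QueryPerformanceCounter(&t1);

    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }
    printf("CreateFile took %.3f ms\n",
           (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart);
    CloseHandle(h);
    return 0;
}
```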

user1181950
  • `fopen` calls `CreateFile`, so it can't be any faster. – Ben Voigt Feb 04 '16 at 20:13
  • Is *everything* the same except for file size? Files that are deeper in the directory structure, have more complex permissions, etc may be more expensive to open. Caching can have an effect. So can context switching. How many benchmark attempts did you average to reach those numbers? – Ben Voigt Feb 04 '16 at 20:15
  • This is probably a duplicate of http://stackoverflow.com/q/7430959/103167 – Ben Voigt Feb 04 '16 at 20:18
  • Yes, the only difference is the file size (and contents). The files are all at the same level, and I am using RAMMap to clear the cache between each run. I've run it multiple times, so it's not a one-time thing. – user1181950 Feb 04 '16 at 20:22
  • Probably your antivirus then. A virus scan definitely takes longer to process a larger file. – Ben Voigt Feb 04 '16 at 20:23
  • I saw that other topic as well, but NtCreateFile doesn't seem to have a flag option for overlapped I/O, so I am not sure whether it can do async I/O. – user1181950 Feb 04 '16 at 20:24
  • Don't have antivirus running :( – user1181950 Feb 04 '16 at 20:25
  • Overlapped is the default for the lower-level APIs; there are flags for synchronous I/O. – Ben Voigt Feb 04 '16 at 20:29
  • I can imagine CreateFile being a little slower for files that are more heavily fragmented, which will tend to correlate with file size. – Harry Johnston Feb 05 '16 at 23:17

0 Answers