The short answer: write both and profile.
The longer answer with considerable hand-waving:
Overwriting a file will involve the following system calls:
open
write
close
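For concreteness, here's what the overwrite path looks like in Python's `os` module, which maps almost one-to-one onto those syscalls (the filename and payload are made up for illustration):

```python
import os

# Hypothetical 1 KiB record to store.
data = b"x" * 1024

fd = os.open("record.dat", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)  # open
try:
    os.write(fd, data)  # write -- the whole record in a single call
finally:
    os.close(fd)        # close
```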
Creating a new file, deleting the old file, and renaming the new file will involve the following system calls:
open
write
close
unlink
rename
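A sketch of the second approach, again with made-up names: write to a temporary file, then rename it over the old one. On POSIX systems `rename()` replaces the target atomically, so the explicit `unlink` may be folded into the rename; Python's `os.replace` exposes exactly that behavior.

```python
import os

data = b"y" * 1024  # hypothetical 1 KiB record

fd = os.open("record.dat.tmp", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)  # open
try:
    os.write(fd, data)  # write
finally:
    os.close(fd)        # close

# rename: atomically replaces any existing record.dat (unlinking it in the process)
os.replace("record.dat.tmp", "record.dat")
```

Readers of `record.dat` see either the complete old contents or the complete new contents, never a mixture.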
System calls are often the slowest part of programs; in general, reducing system calls is a good way to speed a program up. Overwriting the one file will re-use the operating system's internal directory entry data; this will probably also lead to some speed improvements. (They may be difficult to measure in a language with VM overhead...)
Your files are small enough that each write() should be handled atomically, assuming you're updating the entire 1K in a single write. (Since you care about performance, this seems like a safe assumption.) This does mean that other processes should not see partial writes except in the case of catastrophic power failures and lossy mount options. (Not common.) The file-rename approach does give consistent files even in the face of multiple writes.
However, 1K files are a pretty inefficient storage mechanism; many filesystems allocate files in 4 KiB blocks, so each of your files may waste most of a block. If these data blocks exist only in your application, it might make sense to write them in containers of some sort, several at a time. (Quake-derived systems do this for reading their maps, textures, and so forth, out of zip files, because one giant streaming IO request is far faster than thousands of smaller IO requests.) Of course, this is harder if your application is writing these files for other applications to work with, but it might still be worth investigating if the files are rarely shared.
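A minimal sketch of the container idea, assuming fixed-size 1 KiB records and a made-up file layout (a 4-byte count header followed by the records back to back). The point is that one large write replaces many small ones, and reading any record back is just a seek plus one read:

```python
import struct

RECORD_SIZE = 1024
HEADER_SIZE = 4  # hypothetical header: little-endian record count

# Eight dummy 1 KiB records, packed into one container file with a single
# large write instead of eight separate file creations.
records = [bytes([i]) * RECORD_SIZE for i in range(8)]

with open("records.bin", "wb") as f:
    f.write(struct.pack("<I", len(records)) + b"".join(records))

def read_record(path, i):
    """Fetch record i: seek past the header and earlier records, read one block."""
    with open(path, "rb") as f:
        f.seek(HEADER_SIZE + i * RECORD_SIZE)
        return f.read(RECORD_SIZE)
```

Real container formats (zip, tar, custom pack files) add directories and checksums on top of this, but the IO pattern is the same.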