No, it is generally not safe to do this!
You would need to obtain an exclusive write lock for each process, which means all the other processes have to wait while one process is writing to the file. The more I/O-intensive processes you have, the longer the wait time.
It is better to have one output file per process, and to format each line with a timestamp and a process identifier at the beginning, so that you can later merge and sort those output files offline.
Tip: check the format of web-server log files: they put the timestamp at the beginning of each line, so the files can later be combined and sorted.
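Here is a minimal sketch of that idea in C (the file-naming scheme and line format are my assumptions, not a fixed convention): each process appends to its own file, and each line starts with a sortable timestamp followed by the PID, so an offline merge can simply sort on the line prefix.

```c
/* Sketch: one log file per process, each line prefixed with a
 * sortable timestamp and the PID, so the per-process files can be
 * merged and sorted offline later. */
#include <stdio.h>
#include <time.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    char path[64];
    pid_t pid = getpid();
    snprintf(path, sizeof path, "output.%d.log", (int)pid);

    FILE *f = fopen(path, "a");
    if (!f)
        return 1;

    time_t now = time(NULL);
    char stamp[32];
    strftime(stamp, sizeof stamp, "%Y-%m-%dT%H:%M:%S", gmtime(&now));

    /* timestamp first, then PID, then the payload */
    fprintf(f, "%s %d some log message\n", stamp, (int)pid);

    fclose(f);
    return 0;
}
```

Because the timestamp leads each line, sorting the concatenated files by their first field reproduces a globally ordered log.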
EDIT
UNIX processes use a fixed buffer size (e.g. 4096 bytes) when they open files, to transfer data to and from the file on disk. Once the write buffer is full, the process flushes it to disk, i.e. it writes the complete buffer to disk. Note that this happens when the buffer is full, not when there is an end-of-line! That means that even for a single process writing line-oriented text to a file, those lines are typically cut somewhere in the middle at the time the buffer is flushed. Only at the end, when the file is closed after writing, can you assume that the file contains complete lines!
So, depending on when your processes decide to flush their buffers, they write to the file at different times; the order is not deterministic or predictable. And when a buffer is flushed, you cannot assume that it contains only complete lines; it will usually end with a partial line, thereby messing up the output if several processes flush their buffers without synchronization.
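If you are using stdio, you can at least control *when* a single process flushes, e.g. by switching the stream to line buffering with setvbuf(3) or by calling fflush(3) after each record. A minimal sketch (file name assumed); note that this only changes when this one process writes, it does not by itself make concurrent appends safe:

```c
/* Sketch: controlling stdio buffering so flushes happen on line
 * boundaries rather than whenever the (e.g. 4096-byte) buffer fills.
 * This affects only this process; it does NOT synchronize writers. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("out.log", "a");
    if (!f)
        return 1;

    /* line-buffered: the buffer is flushed at every '\n' */
    setvbuf(f, NULL, _IOLBF, 4096);

    fprintf(f, "a complete line, flushed as a unit\n");

    /* alternatively, flush explicitly after each record */
    fprintf(f, "another line\n");
    fflush(f);

    fclose(f);
    return 0;
}
```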
Check this article on Wikipedia: http://en.wikipedia.org/wiki/File_locking#File_locking_in_UNIX
Quote:
The Unix operating systems (including Linux and Apple's Mac OS X,
sometimes called Darwin) do not normally automatically lock open files
or running programs. Several kinds of file-locking mechanisms are
available in different flavors of Unix, and many operating systems
support more than one kind for compatibility. The two most common
mechanisms are fcntl(2) and flock(2). A third such mechanism is
lockf(3), which may be separate or may be implemented using either of
the first two primitives.
You should use either flock or a mutex to synchronize the processes and make sure only one of them can write to the file at a time.
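For example, a minimal sketch using flock(2) (the file name and record content are assumptions): each process takes an exclusive lock, writes one complete record while holding it, and then releases it, so writes never interleave.

```c
/* Sketch: serializing writers with flock(2). Each process takes an
 * exclusive lock around its write, so only one writes at a time.
 * flock() blocks until the lock becomes available. */
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/file.h>

int main(void)
{
    int fd = open("shared.log", O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0)
        return 1;

    if (flock(fd, LOCK_EX) == 0) {          /* wait for exclusive lock */
        const char *line = "one complete record\n";
        ssize_t n = write(fd, line, strlen(line));
        (void)n;                            /* sketch: ignore short writes */
        flock(fd, LOCK_UN);                 /* release for the others */
    }

    close(fd);
    return 0;
}
```

The downside is exactly the wait time mentioned at the top: every writer blocks in flock() while another holds the lock.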
As I mentioned earlier, it is probably faster, easier, and more straightforward to have one output file per process, and to combine those files later (offline) if needed. This approach is used by some web servers, for example, which need to log to multiple files from multiple threads and need to keep all of those threads performant (i.e. not waiting on each other for a file lock).
Here's a related post (check Mark Byers's answer! the accepted answer is not correct/relevant):
Is it safe to pipe the output of several parallel processes to one file using >>?
EDIT 2:
In the comments you said that you want to write fixed-size binary data blocks from the different processes to the same file.
Only if your block size is exactly the system's file-buffer size could this work!
Make sure that your fixed block length is exactly the system's file-buffer size; otherwise you will end up in the same situation as with the incomplete lines. For example, if you use 16k blocks and the system uses 4k blocks, then in general you will see 4k blocks in the file in seemingly random order, with no guarantee that you will always see four blocks in a row from the same process.
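A minimal sketch of that constraint (the 4096-byte size and file name are assumptions, check your system's actual buffer size): bypass stdio entirely, open the file with O_APPEND, and emit each fixed-size block with a single write(2) call, so a block is never split across calls.

```c
/* Sketch: writing fixed-size binary blocks with a single write(2)
 * per block on an O_APPEND descriptor, bypassing stdio buffering.
 * BLOCK_SIZE matching the system's buffer size (assumed 4096 here)
 * is the precondition discussed above. */
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

#define BLOCK_SIZE 4096   /* assumption: equals the system buffer size */

int main(void)
{
    int fd = open("blocks.bin", O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0)
        return 1;

    char block[BLOCK_SIZE];
    memset(block, 0xAB, sizeof block);   /* placeholder payload */

    /* one write() call per block -- never split a block across calls */
    if (write(fd, block, sizeof block) != (ssize_t)sizeof block)
        return 1;

    close(fd);
    return 0;
}
```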