The command you would use might work, but it would have performance issues I would want to avoid:
- "fswatch" would generate output on every modification of the filesystem (e.g. each file update).
- "rsync" would, on each run, recursively scan the whole directory tree on both source and destination for possible changes (not counting the actual data copy, this scan alone takes a long time once there is a large number of files and directories on both sides).
This would mean that for each line output by "fswatch" one "rsync" instance would be started, while each "rsync" run would take longer and longer.
48 hours is a lot of time, and copying the files (~100GB) wouldn't take that long anyway (disk to disk is very fast, and so is a gigabit network).
Instead, I would propose running rsync -a --delete /source /destination
at regular intervals (e.g. every 30 minutes) during the generation process, and once more at the end to be sure nothing is missed. A short script could contain:
#!/bin/bash
while ps -ef | grep "process that generates files" | grep -qv grep; do
echo "Running rsync..."
rsync -a --delete /source /destination
echo "...waiting 30 minutes"
sleep 1800 # seconds
done
echo "Running final rsync..."
rsync -a --delete /source /destination
echo "...done."
...just replace "process that generates files" with whatever the process that generates the files looks like in the "ps -ef" output while it is running (the extra "grep -v grep" keeps the pipeline from matching its own grep process). Adjust the interval as you see fit; I assumed that ~2GB of data are created in 30 minutes, which can be copied in a couple of minutes.
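As a small illustration of the self-match pitfall mentioned above, the same check can be written with "pgrep -f", which scans full command lines much like the "ps -ef" output but never matches itself. This is only a sketch; the background sleep is a hypothetical stand-in for the generator process:

```shell
#!/bin/bash
# Hypothetical stand-in for the file-generating process.
sleep 30 &
gen_pid=$!

# pgrep -f matches against the full command line, much like
# scanning "ps -ef" output, but it never matches itself,
# so no "grep -v grep" filtering is needed.
if pgrep -f "sleep 30" > /dev/null; then
    echo "generator is running"
else
    echo "generator has finished"
fi

kill "$gen_pid" 2>/dev/null
```

With this, the loop condition in the script would read: while pgrep -f "process that generates files" > /dev/null; do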
The script would ensure that "rsync" doesn't run more often than it should, so it spends its time copying files instead of comparing source and destination too often.
The option "-a" (archive) implies the options you used and more (-rlptgoD); "--delete" removes any file that exists in "/destination" but not in "/source" (handy in case of temporary files that were copied but are not actually needed in the final structure).
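For completeness, here is a minimal, self-contained demonstration of the "--delete" behaviour using throwaway temporary directories (the paths are purely illustrative; note the trailing slash on the source, which tells rsync to copy the contents of the directory rather than the directory itself):

```shell
#!/bin/sh
# Create throwaway source and destination directories.
src=$(mktemp -d)
dst=$(mktemp -d)

echo keep > "$src/keep.txt"
echo stale > "$dst/stale.txt"   # exists only in the destination

# "--delete" removes stale.txt from the destination because it
# does not exist in the source; keep.txt is copied over.
rsync -a --delete "$src/" "$dst/"

ls "$dst"   # only keep.txt remains

rm -rf "$src" "$dst"
```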