
I want certain directories of a Unix machine (A) backed up. For space and security reasons, the backups should not be stored locally on machine A, but on a remote machine (B). Backups can't be pushed to B via SSH or similar; they must be pulled manually via FTP or similar. Due to bandwidth and traffic limitations, only incremental backups can be used.

I am somewhat familiar with rsync and would know how to solve this locally or via SSH. However, I think my current backup and restore processes are not optimal.

My current solution for making incremental backups without keeping the increments is as follows:

  1. Make an initial full backup and download.
  2. Make incremental backups with rsync, using an increment directory as the destination and the full backup as the --compare-dest directory.
  3. Pack the increment directory (the rsync destination) into a separate archive for future download.
  4. "Merge" the increment into the full backup with mv /increment/* /full.
  5. Start at step 2 for every future incremental backup.

This way, no additional space for the increments is necessary, and all future backups are incremental rather than differential. First question: Is there an easier way, or can rsync do steps 2 and 4 (maybe even 3) in one?
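
For reference, steps 2 to 4 boil down to something like the following sketch (the paths /src, /full, /increment and the archive name are only placeholders):

# 2. write only files that differ from the full backup into the increment directory
rsync -a --compare-dest=/full/ /src/ /increment/

# 3. pack the increment into a separate archive for download
tar -czf /archives/increment-$(date +%F).tar.gz -C /increment .

# 4. merge the increment into the full backup (rsync merges into existing subdirectories)
rsync -a /increment/ /full/
rm -rf /increment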

My current solution for restoring would be iterating through all the increments, as to my understanding, due to the manual downloading, no hardlinks in the increments are possible. Second question: Is my understanding right, and is there any other way that would allow restoring without iterating through all the increments?
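
A restore along these lines would look roughly as follows (archive names are only placeholders, and the increments have to be applied oldest first):

# unpack the full backup first
mkdir -p /restore
tar -xzf full.tar.gz -C /restore

# then unpack every increment on top of it, in chronological order
for inc in increment-*.tar.gz; do
    tar -xzf "$inc" -C /restore
done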

Thanks for any help!

l487804

1 Answer


You can use --link-dest to base each incremental backup on the most recent previous backup. Because rsync then creates hardlinks for files that already exist in that previous backup, each backup appears as a full backup while unchanged files take up no additional space.

This is how I do it:

# example values (adjust to your environment); new_backup is named with a timestamp
backup_path="/mnt/backups"
source_path="/data/"   # trailing slash: copy the contents of the directory, not the directory itself
new_backup="$(date +%Y-%m-%d_%H%M%S)"

# obtain most recent backup
last_backup=$(ls -t "${backup_path}" | head -n1)
# create incremental backup
if [[ -n "${last_backup}" ]]; then
    echo "Create incremental backup ${new_backup} by using last backup ${last_backup}"
    rsync -av --stats --delete --link-dest="${backup_path}/${last_backup}" "${source_path}" "${backup_path}/${new_backup}"
else
    echo "Create full backup ${new_backup}"
    # create very first backup
    rsync -av --stats "${source_path}" "${backup_path}/${new_backup}"
fi

If FTP is your only option, mount it as a local path using curlftpfs. You will need to test whether the rsync command works properly, though, as I was not able to find out whether curlftpfs supports hardlinks. Hardlinks are important here because --link-dest creates them for already existing files to save storage space.
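
A minimal sketch of such a mount (host, credentials and mount point are placeholders):

# mount the FTP server as a local path
mkdir -p /mnt/ftp
curlftpfs ftp://user:password@ftp.example.com /mnt/ftp

# point backup_path from the script above into the mount, e.g. backup_path="/mnt/ftp/backups"

# unmount when finished
fusermount -u /mnt/ftp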

Note: Each incremental backup is a full backup with no dependency on previously created backups. This can be confusing, as it is different behaviour from other incremental backup software, which needs all previous backups to restore the data. It works because deleting a file only deletes one link to its data, and as long as at least one link exists, the data still exists.
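
You can verify this hardlink behaviour with two linked files in a scratch directory:

echo "hello" > a
ln a b       # b is a second hardlink to the same data
rm a         # removes one link, not the data
cat b        # still prints "hello"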

I created an incremental rsync script for my Unraid server; maybe it offers further inspiration.

P.S. If you need to preserve the file owner and you suffer from "chown failed: Operation not permitted (1)" errors, you could put a volume image on your FTP server and mount it in an additional step. But this will probably create massive overhead. It must be tested.
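
A rough sketch of that idea, assuming the FTP server is already mounted at /mnt/ftp (size, file name and mount point are placeholders; whether this performs acceptably over curlftpfs is exactly what must be tested):

# create and format an image file on the FTP mount
# (truncate creates a sparse file; whether the FTP backend handles that is part of the test)
truncate -s 50G /mnt/ftp/backup.img
mkfs.ext4 -F /mnt/ftp/backup.img

# loop-mount it; files inside keep their owners and permissions
mkdir -p /mnt/backupfs
mount -o loop /mnt/ftp/backup.img /mnt/backupfs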

mgutt