
I am dumping a MySQL WordPress database every day as a backup. Since I don't want to end up with 365 .sql files after a year, I figured it would be sensible to keep only the last 30 days of dump files: always keep the most recent 30 and automatically delete the older ones, one per day.

I am looking to program this in bash as part of a cron job. I already have the part where I dump the database and send the file to the backup server; I only need to add the snippet that counts the dumps and deletes the oldest one each day.

Here is what I've got (the username and password are kept in a .my.cnf file):

# Date stamp for today's dump, e.g. 06-30-15
now=$(date +'%m-%d-%y')
# Dump the database and compress it
mysqldump -h mysql.server.com my_database | gzip -9 > ${home}/dbBackups/db_backup.sql.gz
# Prefix the dump with today's date
mv ${home}/dbBackups/db_backup.sql.gz ${home}/dbBackups/${now}_db_backup.sql.gz
# Copy the dated dump to the backup server
scp ${home}/dbBackups/${now}_db_backup.sql.gz backup_user@backup.server.com:/home/backup_user/backup.server.com/dbBackups/

Does anyone have an idea on how to implement this functionality?

Jeanmichel Cote
  • Does your cron task run on your machine or the remote one? You should have it run on the remote one; otherwise it would be too easy. If you are not the admin of the remote one, a second-best solution would be keeping exact backups locally, then using `rsync` with the `--delete` option to delete remote backups (see the sketch after these comments). – Jason Hu Jul 01 '15 at 03:06
  • You could also create a `logrotate` definition for the file/folder. See: `man 8 logrotate` – David C. Rankin Jul 01 '15 at 04:15
  • Yeah, both of you guys offer very valuable information. @HuStmpHrrr, I'll definitely try this option first, as it looks like the simplest one and requires less configuration than logrotate in my case. – Jeanmichel Cote Jul 01 '15 at 11:52
  • @DavidC.Rankin, thanks for the tip; I've never used logrotate before and will definitely dig into it for this or another project. – Jeanmichel Cote Jul 01 '15 at 11:53
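
A minimal sketch of the mirroring approach from the first comment, assuming the local ${home}/dbBackups/ directory is already pruned to the last 30 dumps (host and paths are reused from the question):

# Mirror the local backup directory to the remote one. --delete removes
# remote files that no longer exist locally, so pruning the local copies
# also prunes the remote ones on the next sync.
rsync -a --delete ${home}/dbBackups/ backup_user@backup.server.com:/home/backup_user/backup.server.com/dbBackups/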

3 Answers


The standard command to print files older than 30 days is:

find <full_path_to_your_dir> -type f -mtime +30 -print 

The standard command to delete files older than 30 days is:

find <full_path_to_your_dir> -type f -mtime +30 -delete

The above command will delete every file in that directory tree whose modification time is more than 30 days old.
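
Note that `-delete` removes every matching file under the directory, not just the dumps. If anything else might live there, adding a `-name` filter is safer; the sketch below assumes the ${now}_db_backup.sql.gz naming convention from the question:

# Restrict deletion to files matching the dump naming convention,
# and stay in the top-level directory (-maxdepth 1 avoids recursion)
find <full_path_to_your_dir> -maxdepth 1 -type f -name '*_db_backup.sql.gz' -mtime +30 -delete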

Biswajit_86
  • This will be part 1 of my two-part solution. As is, it looks like it will do the job. Thanks! – Jeanmichel Cote Jul 01 '15 at 11:56
  • The question that came up with this is: since it will run as a cron job without prompting, what happens during the first 30 days, when the `find` command won't find anything to delete? Will it just skip to the next command? Will it freeze the script? – Jeanmichel Cote Jul 01 '15 at 12:19
  • If it doesn't find any files older than 30 days it won't do anything. It will continue to the next command. You don't have to worry about it hanging the script waiting for the find command to do something. – Cody Stevens Jul 01 '15 at 13:38
  • Hi @JimiSpire, could you please "accept" my answer if it solves your problem? – Biswajit_86 Jul 01 '15 at 15:50

The find command mentioned above is the easiest and cleanest solution. If you want, you can also do:

# Compute the date stamp from 30 days ago and remove that day's dump
old=$(date -d "30 days ago" +'%m-%d-%y')
rm ${home}/dbBackups/${old}_db_backup.sql.gz

You will want to make sure there is no way to screw up your paths. In fact, ${home} is dangerously close to the environment variable $HOME, so you may want to consider renaming it. You could also cron a simple script like this to run daily and remove files from wherever you are scp'ing them.
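
For the daily run, a crontab entry along these lines would do it (a sketch; the 2 a.m. schedule and the /home/myuser/bin/prune_backups.sh path are hypothetical placeholders, not from the thread):

# m h dom mon dow  command
0 2 * * * /home/myuser/bin/prune_backups.sh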

Cody Stevens
  • +1 for the tip about the $home var, although in my script I declare the path just above the $now var, which I haven't shown here. – Jeanmichel Cote Jul 01 '15 at 11:58

You all have already been extra helpful. Thank you.

So the version 1 script that I will try looks like this:

homePath="/home/myuser"
now=$(date +'%m-%d-%y')
mysqldump -h mysql.server.com my_database | gzip -9 > ${homePath}/dbBackups/db_backup.sql.gz
mv ${homePath}/dbBackups/db_backup.sql.gz ${homePath}/dbBackups/${now}_db_backup.sql.gz
find ${homePath}/dbBackups/ -type f -mtime +30 -delete
rsync -e ssh backup_user@backup.server.com:/home/backup_user/backup.server.com/dbBackups/ ${homePath}/dbBackups/

Simple enough. Does that sound right to you?

As version 1 didn't quite work, after minimal fiddling, here is the working script:

homePath="/home/myuser"
now=$(date +'%m-%d-%y')

# Dump, compress, then date-stamp the backup
mysqldump -h mysql.server.com my_database | gzip -9 > ${homePath}/dbBackups/db_backup.sql.gz
mv ${homePath}/dbBackups/db_backup.sql.gz ${homePath}/dbBackups/${now}_db_backup.sql.gz
# Prune local dumps older than 30 days
find ${homePath}/dbBackups/ -type f -mtime +30 -delete
# Push the local backup directory to the remote server (local source first,
# remote target second; without --delete, pruned files remain on the remote)
rsync -a --log-file=${homePath}/rsync.log ${homePath}/dbBackups/ backup_user@backup.server.com:/home/backup_user/backup.server.com/dbBackups/
Jeanmichel Cote
  • The script didn't work; it was `skipping directory .`, probably because I inverted `source` and `target` in my rsync command. Also, it looks like we can skip the `-e ssh` part, as ssh is the default protocol rsync uses for remote addresses. – Jeanmichel Cote Jul 01 '15 at 12:42
  • I'd recommend setting `now` to `date +%F` (equivalent to %Y-%m-%d) so that your backups order sensibly without using ls -t. – Mike Partridge Jan 25 '16 at 15:35
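
Building on that last comment, a lightly hardened variant of the final script might look like this (a sketch: the %F date stamp comes from the comment; writing straight to the dated filename, quoting the paths, and `set -euo pipefail`, which aborts before pruning and syncing if mysqldump fails mid-pipe, are my additions):

#!/bin/bash
# Abort on errors, unset variables, or a failure anywhere in a pipeline
set -euo pipefail

homePath="/home/myuser"
now=$(date +%F)   # e.g. 2015-07-01; names sort chronologically

# Dump straight to the dated filename (no separate mv step)
mysqldump -h mysql.server.com my_database | gzip -9 > "${homePath}/dbBackups/${now}_db_backup.sql.gz"
# Prune local dumps older than 30 days
find "${homePath}/dbBackups/" -type f -name '*_db_backup.sql.gz' -mtime +30 -delete
# Mirror the pruned directory to the backup server
# (add --delete to also prune the remote copies, per the comment on the question)
rsync -a --log-file="${homePath}/rsync.log" "${homePath}/dbBackups/" backup_user@backup.server.com:/home/backup_user/backup.server.com/dbBackups/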