You might not even like rsync. Yeah it’s old. Yeah it’s slow. But if you’re working with Linux you’re going to need to know it.
In this video I walk through my favorite everyday flags for rsync.
Support the channel:
https://patreon.com/VeronicaExplains
https://ko-fi.com/VeronicaExplains
https://thestopbits.bandcamp.com/
Here’s a companion blog post, where I cover a bit more detail: https://vkc.sh/everyday-rsync
Also, @BreadOnPenguins made an awesome rsync video and you should check it out: https://www.youtube.com/watch?v=eifQI5uD6VQ
Lastly, I left out all of the ssh setup stuff because I made a video about that and the blog post goes into a smidge more detail. If you want to see a video covering the basics of using SSH, I made one a few years ago and it’s still pretty good: https://www.youtube.com/watch?v=3FKsdbjzBcc
Chapters:
1:18 Invoking rsync
4:05 The --delete flag for rsync
5:30 Compression flag: -z
6:02 Using tmux and rsync together
6:30 but Veronica… why not use (insert shiny object here)
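For quick reference, a typical invocation along the lines of what the chapters cover might look like this (the paths and host below are placeholders, not commands taken from the video):

    # start a tmux session so the transfer survives a dropped SSH connection
    tmux new -s transfer
    # archive mode (-a), verbose (-v), compress in transit (-z),
    # and remove files from the destination that no longer exist on the source
    rsync -avz --delete ~/Documents/ user@backuphost:/srv/backups/documents/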
I would generally argue that rsync is not a backup solution. But it is one of the best transfer/archiving solutions.
Yes, it is INCREDIBLY powerful and is often 90% of what people actually want/need. But to be an actual backup solution you still need infrastructure around that. Bare minimum is a crontab. But if you are actually backing something up (not just copying it to a local directory) then you need some logging/retry logic on top of that.
At which point you are building your own borg, as it were. Which, to be clear, is a great thing to do. But… backups are incredibly important and it is very much important to understand what a backup actually needs to be.
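As a rough sketch of that "bare minimum" (a crontab entry plus some logging and a failure notification), with the schedule, paths, host, and address all made up, and assuming a working local mail command:

    # nightly one-way sync at 02:30; append output to a log, email on failure
    30 2 * * * rsync -a --delete /home/veronica/ backuphost:/srv/backups/home/ >> /var/log/rsync-home.log 2>&1 || echo "rsync backup failed" | mail -s "backup failure" admin@example.com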
Borg gang represent!
Yeah, if you want to use rsync specifically for backups, you’re probably better off using something like rdiff-backup, which makes use of rsync to generate backups and store them efficiently, and drive it from something like backupninja, which will run the task periodically and notify you if it fails.
rsync: one-way synchronization
unison: bidirectional synchronization
git: synchronization of text files with good interactive merging.
rdiff-backup: rsync-based backups. I used to use this and moved to restic, as the backupninja target for rdiff-backup has kind of fallen into disrepair.
That doesn’t mean “don’t use rsync”. I mean, rsync’s a fine tool. It’s just… not really a backup program on its own.
Beware rdiff-backup. It certainly does turn rsync (not a backup program) into a backup program.
However, I used rdiff-backup in the past and it can be a bit problematic. If I remember correctly, every “snapshot” you keep in rdiff-backup uses as many inodes as the thing you are backing up. (Because every “file” in the snapshot is either a file or a hard link to an identical version of that file in another snapshot.) So this can be a problem if you store many snapshots of many files.
But it does make rsync a backup solution; a snapshot or a redundant copy is very useful, but it’s not a backup.
(OTOH, rsync is still wonderful for large transfers.)
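For context, the hard-link layout described above (each file in a snapshot is either a real file or a hard link to an identical copy in an earlier snapshot) is the pattern rsnapshot automates; with plain rsync it can be sketched with --link-dest, something like this (paths and dates are illustrative):

    # unchanged files in the new snapshot become hard links into the previous
    # one, so they cost an extra inode/directory entry but no extra data
    PREV=/srv/snapshots/2024-06-01
    NEW=/srv/snapshots/2024-06-02
    rsync -a --delete --link-dest="$PREV" /home/veronica/ "$NEW/"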
I think that you may be thinking of rsnapshot rather than rdiff-backup which has that behavior; both use rsync.
But I’m not sure why you’d be concerned about this behavior.
Are you worried about inode exhaustion on the destination filesystem?
Huh, I think you’re right.
Before discovering ZFS, my previous backup solution was rdiff-backup. I have memories of it being problematic for me, but I may be wrong in my remembering of why it caused problems.
Having a synced copy elsewhere is not an adequate backup and snapshots are pretty important. I recently had RAM go bad and my most recent backups had corrupt data, but having previous snapshots saved the day.
Don’t understand the downvotes. This is the type of lesson people have learned from losing data and no sense in learning it the hard way yourself.
How would you pin down something like this? If it happened to me, I expect I just wouldn’t understand what’s going on.
I originally thought it was one of my drives in my RAID1 array that was failing, but I noticed copying data was yielding btrfs corruption errors on both drives that could not be fixed with a scrub, and I was also getting btrfs corruption errors on the root volume as well. I figured it would be quite an odd coincidence if my main SSD and 2 hard disks all went bad, and I happened upon an article talking about how corrupt data can also occur if the RAM is bad. I also ran SMART tests and everything came back with a clean bill of health. So, I installed and booted into Memtest86+ and it immediately started showing errors on the single 16GiB stick I was using. I happened to have a spare stick that was a different brand, and that one passed the memory test with flying colors. After that, all the corruption errors went away and everything has been working perfectly ever since.
I will also say that legacy file systems like ext4 with no checksums wouldn’t even complain about corrupt data. I originally had ext4 on my main drive and at one point thought my OS install went bad, so I reinstalled with btrfs on top of LUKS and saw I was getting corruption errors on the main drive at that point, so it occurred to me that 3 different drives could not have possibly had a hardware failure and something else must be going on. I was also previously using ext4 and mdadm for my RAID1 and migrated it to btrfs a while back. I was previously noticing as far back as a year ago that certain installers, etc. that previously worked no longer worked, which happened infrequently and didn’t really register with me as a potential hardware problem at the time, but I think the RAM was actually progressively going bad for quite a while. btrfs with regular scrubs would’ve made it abundantly clear much sooner that I had files getting corrupted and that something was wrong.
So, I’m quite convinced at this point that RAID is not a backup, even with the abilities of btrfs to self-heal, and simply copying data elsewhere is not a backup either, because something like bad RAM in both cases can destroy data during the copying process, whereas older snapshots in the cloud will survive such a hardware failure. Older data backed up before the RAM went faulty may be fine as well, but you’re taking a chance that a recent update may overwrite good data with bad data. I was previously using Rclone for most backups while testing Restic with daily, weekly, and monthly snapshots for a small subset of important data the last few months. After finding some data that was only recoverable in a previous Restic snapshot, I’ve since switched to using Restic exclusively for anything important enough for cloud backups. I was mainly concerned about the space requirements of keeping historical snapshots, and I’m still working on tweaking retention policies and taking separate snapshots of different directories with different retention policies according to my risk tolerance for each directory I’m backing up. For some things, I think even btrfs local snapshots would suffice, with the understanding that they reduce recovery time but aren’t really a backup. However, any irreplaceable data really needs monthly Restic snapshots in the cloud. I suppose if you don’t have something like btrfs scrubs to alert you that you have a problem, even snapshots from months ago may have an unnoticed problem.
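As an aside, the combination being described here (regular scrubs to catch corruption, plus cloud snapshots pruned by a retention policy) looks roughly like this with btrfs and restic; the repository URL, mount point, and keep counts are placeholders:

    # periodic integrity check of the btrfs filesystem
    btrfs scrub start /mnt/data
    # back up, then prune snapshots according to a daily/weekly/monthly policy
    restic -r s3:s3.example.com/backups backup /mnt/data/important
    restic -r s3:s3.example.com/backups forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune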
+1 for rdiff-backup. Been using it for 20 years or so, and I love it.
I use rsync and a pruning script in crontab on my NFS mounts. I’ve tested it numerous times breaking containers and restoring them from backup. It works great for me at home because I don’t need anything older than 4 monthly, 4 weekly, and 7 daily backups.
However, in my job I prefer something like bacula. The extra features and granularity of restore options makes a world of difference when someone calls because they deleted prod files.
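A home-grown version of the rsync-plus-pruning-script-in-crontab approach above can be quite small, assuming snapshots live in dated directories; the names and the count here are illustrative:

    # keep only the 7 newest daily snapshot directories (named YYYY-MM-DD)
    cd /srv/backups/daily || exit 1
    ls -1d 20??-??-?? | sort | head -n -7 | xargs -r rm -rf --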
I don’t know if there’s a term for them, but Bacula (and I think AMANDA might fall into this camp, but I haven’t looked at it in ages) are oriented more towards…“institutional” backup. Like, there’s a dedicated backup server, maybe dedicated offline media like tapes, the backup server needs to drive the backup, etc).
There are some things that rsnapshot, rdiff-backup, duplicity, and so forth won’t do.
At least some of them (rdiff-backup, for one) won’t dedup files with different names. If a file is unchanged, it won’t use extra storage, but it won’t identify different identical files at different locations. This usually isn’t all that important for a single host, other than maybe if you rename files, but if you’re backing up many different hosts, as in an institutional setting, they likely have files in common. They aren’t intended to back up multiple hosts to a single, shared repository.
Pull-only. I think that it might be possible to run some of the above three in “pull” mode, where the backup server connects and gets the backup, but where they don’t have the ability to write to the backup server. This may be desirable if you’re concerned about a host being compromised, but not the backup server, since it means that an attacker can’t go dick with your backups. Think of those cybercriminals who encrypt data at a company and wipe other copies and then demand a ransom for an unlock key. But the “institutional” backup systems are going to be aimed at having the backup server drive all this, and have the backup server have access to log into the individual hosts and pull the backups over.
Dedup for non-identical files. Note that restic can do this. While files might not be identical, they might share some common elements, and one might want to try to take advantage of that in backup storage.
rdiff-backup and rsnapshot don’t do encryption (though duplicity does). If one intends to use storage not under one’s physical control (e.g. “cloud backup”), this might be a concern.
No “full” backups. Some backup programs follow a scheme where one periodically does a backup that stores a full copy of the data, and then stores “incremental” backups from the last full backup. rsnapshot, rdiff-backup, and duplicity are all always-incremental, and are aimed at storing their backups on a single destination filesystem. A split between “full” and “incremental” is probably something you want if you’re using, say, tape storage and having backups that span multiple tapes, since it controls how many pieces of media you have to dig up to perform a restore.
I don’t know how Bacula or AMANDA handle it, if at all, but if you have a DBMS like PostgreSQL or MySQL or the like, it may be constantly receiving writes. This means that you can’t get an atomic snapshot of the database, which is critical if you want to be reliably backing up the storage. I don’t know what the convention is here, but I’d guess either using filesystem-level atomic snapshot support (e.g. btrfs) or requiring the backup system to be aware of the DBMS and instructing it to suspend modification while it does the backup. rsnapshot, rdiff-backup, and duplicity aren’t going to do anything like that.
I’d agree that using the more-heavyweight, “institutional” backup programs can make sense for some use cases, like if you’re backing up many workstations or something.