Yes, there are two theoretical options. Both use asynchronous replication, so they will have a nonzero RPO (although from your description that seems acceptable to some extent):
1. Use `zfs send` to create a stream on the source system, and then use some tool that can understand the contents of that stream and translate it to POSIX filesystem primitives on the receiving system.
2. Take a snapshot on the source system and then use an FS-agnostic tool to copy the data from that snapshot over.
The first one has the benefit of being the most performant option, because ZFS knows what parts of its pool have been changed and only has to look at / send those parts. However, I don’t know of any tool that can actually do this. (Prototypes have been built at ZFS developer hackathons, but there is not a big audience for this type of tool so they’ve never been made production quality AFAIK.)
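For reference, the sending half of that first approach is just standard ZFS tooling; it's the stream-to-POSIX translator on the receiving end that's missing. A minimal sketch of what the source side would run, assuming a hypothetical dataset `tank/data` and two replication snapshots:

```
# Source side of option 1 (dataset and snapshot names are placeholders).
# Take a new snapshot, then emit an incremental stream containing only the
# blocks that changed since the previous replication snapshot.
zfs snapshot tank/data@repl-new
zfs send -i tank/data@repl-old tank/data@repl-new > /tmp/incremental.zstream
```

A receiving tool would have to parse that stream and replay the changes as regular file operations on the target, which is the part that (as far as I know) only exists as hackathon prototypes.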
The second one is less performant because it has to inspect the data to see what changed, but it has the benefit that tools already exist: although you may have to fight with it a little, you can use `rsync` for this. Its RPO will also be somewhat higher, since transferring the data takes a bit longer. The slightly tricky parts (there's a sketch of a full run after this list) will be:
- Making sure `rsync` writes its metadata to a writable location on the source side, since the snapshot you're copying from will be read-only. (Look in the `.zfs/` directory in the root of the filesystem you want to copy to find a readable copy of the snapshot.)
- Making sure the failover target isn't left in an intermediate state if the source system dies during an `rsync` run. Hopefully your target filer can take a snapshot before you start each `rsync` run, so that you can roll back to the last good state if the run fails. Otherwise, hopefully your data / application can tolerate some inconsistency. (Or maybe there's an `rsync` option for this that I haven't used before.)
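To make that concrete, here's a minimal sketch of one replication cycle for the second option. The dataset name, its mountpoint, the `rsync` destination, and the filer's snapshot command are all assumptions about your environment, so treat this as an outline rather than a drop-in script:

```
#!/bin/sh
# One replication cycle for option 2 (all names and paths are hypothetical).
set -e

SRC_DATASET=tank/data                  # ZFS dataset on the source
SRC_MOUNT=/tank/data                   # its mountpoint (default for tank/data)
SNAP=repl-$(date +%Y%m%d-%H%M%S)       # snapshot name for this run
TARGET=filer:/export/data              # rsync destination on the failover filer

# 1. Take a read-only, point-in-time snapshot on the source.
zfs snapshot "${SRC_DATASET}@${SNAP}"

# 2. If the target filer supports snapshots, take one now so a failed run
#    can be rolled back to the last good state (command is filer-specific).
# ssh filer 'snapshot-create export/data pre-repl'

# 3. Copy from the snapshot's read-only view under .zfs/snapshot/, not from
#    the live filesystem, so the data doesn't change underneath rsync.
rsync -aHAX --delete \
    "${SRC_MOUNT}/.zfs/snapshot/${SNAP}/" \
    "${TARGET}/"

# 4. On success, older source snapshots can be destroyed; keep at least the
#    latest so you know exactly which state the target currently reflects.
# zfs destroy "${SRC_DATASET}@<older-snapshot>"
```

If the `rsync` run fails partway through, you'd roll the target back to the snapshot taken in step 2 (or accept the inconsistency, per the caveats above) and retry on the next cycle.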