I do ZFS remote replication from a master host to a slave host, driven by a Perl script that runs on the master host.
For each filesystem the script ssh-es to the remote host, starts mbuffer in listening mode, then continues and sends the data. On success mbuffer should exit by itself.
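For context, the sending side is roughly the pipeline sketched below; sendSnapshot, $host and $snapshot are placeholders for what the real script computes, and the same globals ($zfs, $mbuffer, %::c) are assumed:

sub sendSnapshot {
    my ($host, $snapshot) = @_;
    # Local side of the transfer: "zfs send" is piped into mbuffer, which
    # connects to the listener started on the slave by mbuffer() below.
    my $send = join(' ', $zfs, 'send', $snapshot);
    my $out  = join(' ', $mbuffer, '-O', "$host:$::c{port}");
    system("$send | $out") == 0
        or die "replication pipeline failed: $?";
}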
The problem
It was quite difficult to start mbuffer on the remote host over ssh and then continue in the script, and I ended up with what you can see below.
The problem is that, until the script exits, it leaves behind one <defunct> process for each filesystem.
Question
Is it possible to avoid having the <defunct> processes?
sub mbuffer {
    my ($id, $zfsPath) = @_;

    # Remote pipeline: mbuffer listens on the configured port and feeds zfs receive.
    my $m = join(' ', $mbuffer, '-I', $::c{port});
    my $z = join(' ', $zfs, 'receive', $zfsPath);
    my $c = shellQuote($ssh, $::c{slaves}{$id}, join('|', $m, $z));

    my $pm  = Parallel::ForkManager->new(1);
    my $pid = $pm->start;
    if (!$pid) {
        no warnings; # fixes "exec" not working
        exec($c);
        $pm->finish; # never reached when exec() succeeds
    }
    sleep 3; # wait for mbuffer to listen
    return $pid;
}
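For completeness, the sub is called once per filesystem, roughly as in the reconstruction below (@filesystems, $id, $snapshot and sendSnapshot are placeholder names, not the actual code). The $pid that mbuffer() returns is never waited on anywhere, and the exited ssh children show up as <defunct> until the script itself exits:

# Hypothetical calling loop: one remote listener per filesystem, then the send.
for my $zfsPath (@filesystems) {
    my $pid = mbuffer($id, $zfsPath);                       # remote "mbuffer | zfs receive"
    sendSnapshot($::c{slaves}{$id}, "$zfsPath\@$snapshot");  # local "zfs send | mbuffer -O"
}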