
Also posted on PerlMonks.

I have this very simple Perl script on my Linux server.

What I would like to be able to do is:

1. Call the script from a browser on a separate machine.
2. Have the script fork.
3. Have the parent send an HTTP response (freeing up the browser).
4. Immediately end the parent.
5. Let the child do its job (heavy, complex database work that could take a minute or two).
6. Have the child end itself with no output whatsoever.

When I call this script from a browser, the browser does not receive the response until the child is complete.

Yes, it works when called from the command line.

Is what I want to do possible? P.S. I even tried it with Proc::Simple, but I get the same hang-up.

#!/usr/bin/perl
use strict;
use warnings;
use lib '/var/www/cgi-bin';
use CGI;

local $SIG{CHLD} = 'IGNORE';

my $q = CGI->new;

my $pid = fork();
if (!defined $pid) {
    die "Cannot fork a child: $!";
} elsif ($pid == 0) {
    print $q->header();
    print "i am the child\n";
    sleep(10);
    print "child is done\n";
    exit;
} else {
    print $q->header();
    print "I am the parent\n";
    print "parent is done\n";
    exit 0;
}
Bartender1382
  • It's not clear what your question is. In the absence of a call to `wait`, the parent will not wait for the child. – William Pursell Apr 06 '22 at 22:19
  • @WilliamPursell But that's exactly what is happening. I call the script from the browser and I was expecting to see "parent is done" immediately. Meanwhile the child would finish sleeping and terminate on its own. The end goal being that I would stick all my database calls where the sleep is currently. But that is not what is happening. I call the script above from the browser and it does not return a response until the child is also done. They are somehow still connected. – Bartender1382 Apr 06 '22 at 22:35
  • It's not entirely clear how the browser is invoking the script. Since the parent and child will be writing to the same file descriptors, I would guess that the browser is waiting for all the write ends to close and is just blocking on a read. Try having the child close its stdout and stderr. (It should probably daemonize itself entirely.) – William Pursell Apr 06 '22 at 22:48
  • For a simple diagnostic, you could have the parent write some data to a file before it terminates. I suspect you will see that data in the file system almost immediately. – William Pursell Apr 06 '22 at 22:49
  • @WilliamPursell While that may be so, what I need it to do is have the parent send the proper httpResponse to the browser immediately. Otherwise, I'm not getting the functionality I want. – Bartender1382 Apr 06 '22 at 23:10
  • This child might have to reopen STD* (to avoid problems with script A's parent waiting), but yes – ikegami Apr 07 '22 at 01:49
  • What do you mean by "run script from my browser"? Using a similar Perl script to yours, running the script directly I can't avoid A ending right away, leaving B running as a zombie (ppid=1) on Linux. If I create a wrapper.pl Perl script to call your script using system("ab.pl"), A ends right away as well. If I use backticks instead of system, then wrapper.pl waits for B to end. Are you using Windows? Fork works differently there and maybe that's your issue. Lastly, what happens if you comment out your 3 CGI-related lines? – Keith Gossage Apr 07 '22 at 02:41
  • @KeithGossage The script is a Perl script residing on a linux server. I run the script from a web browser on a different machine. I tried it without the CGI lines, and it still waits. Perhaps I should close this question and ask it in Perlmonks? – Bartender1382 Apr 07 '22 at 03:00
  • Reworded the OP – Bartender1382 Apr 07 '22 at 03:35

3 Answers


In general you must detach the child process from its parent to allow the parent to exit cleanly; otherwise the web server cannot assume that the script's output is finished.

} elsif ($pid == 0) {
   close STDIN;
   close STDERR;
   close STDOUT;   # or redirect to /dev/null
   do_long_running_task();
   exit;
}

In your example, the child process is making print statements until it exits. Where do those prints go if the parent process has been killed and closed its I/O handles?
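To make the hang visible without a web server, here is a self-contained sketch (a pipe stands in for the connection to the server; all names are mine). The read side sees EOF only once every process holding a copy of the write end has closed it, which is exactly why the browser keeps waiting for the sleeping child:

```perl
use strict;
use warnings;

pipe(my $reader, my $writer) or die "pipe: $!";

my $cgi = fork() // die "fork: $!";
if ($cgi == 0) {
    # the "CGI script": parent prints and exits, child lingers
    close $reader;
    my $kid = fork() // die "fork: $!";
    if ($kid == 0) {
        sleep 2;              # long job; our copy of the write end stays open
        exit 0;
    }
    print {$writer} "parent is done\n";
    close $writer;
    exit 0;                   # parent's copy of the write end is now closed
}

# the "web server": only reads the script's output
close $writer;
my $start   = time;
my @output  = <$reader>;      # blocks until ALL copies of the write end close
my $elapsed = time - $start;
printf "got %d line(s) after ~%d second(s)\n", scalar @output, $elapsed;
waitpid $cgi, 0;
```

Even though the parent closed its end of the pipe immediately, the read blocks for the full two seconds because the sleeping grandchild still holds a copy of the descriptor.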

mob
  • My eyes are tired, I will try it in the morning. But at a quick glance it appears to work. So will just double check it, before marking this correct. But in a quick test this appears to work. – Bartender1382 Apr 07 '22 at 05:03

One way for a parent process to start another process that will go off on its own is to "double fork": the child itself forks, then exits right away, so its own child is reparented to init and cannot become a zombie.

This may help here, since it does seem that there is blocking because file descriptors are shared between parent and child, as brought up in the comments. If the child exited quickly that might be enough, but since you need a process for a long-running job, fork twice:

use warnings;
use strict;
use feature 'say';

my $pid = fork // die "Can't fork: $!";

if ($pid == 0) { 
    say "\tChild. Fork";

    my $ch_pid = fork // die "Can't fork from child: $!";

    if ($ch_pid == 0) {
        # grandchild, run the long job
        sleep 10; 
        say "\t\tgrandkid done";
        exit;
    }   

    say "\tChild, which just forked, exiting right away.";
    exit;
}

say "Parent, and done";

I am not sure how to simulate your setup to test whether this helps, but since you say that the child produces "no output whatsoever" it may be enough. It is worth trying since it is simpler than daemonizing the process (which I would expect to do the trick).
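For completeness, daemonizing is not much more code. A minimal sketch (the helper name and the /dev/null redirects are my assumptions): in the CGI setting the script would print and flush the full HTTP response first, since the original process exits inside the call and only the fully detached grandchild returns to run the long job.

```perl
use strict;
use warnings;
use POSIX qw(setsid);

# Minimal daemonize sketch (helper name is mine). Call it AFTER printing
# and flushing the HTTP response: the original process exits inside, and
# only the fully detached grandchild returns to the caller.
sub daemonize {
    my $pid = fork() // die "fork: $!";
    exit 0 if $pid;                      # original (CGI) process exits here

    setsid() != -1 or die "setsid: $!";  # new session, no controlling tty

    $pid = fork() // die "fork: $!";
    exit 0 if $pid;                      # session leader exits too

    chdir '/' or die "chdir: $!";
    open STDIN,  '<', '/dev/null' or die "reopen STDIN: $!";
    open STDOUT, '>', '/dev/null' or die "reopen STDOUT: $!";
    open STDERR, '>', '/dev/null' or die "reopen STDERR: $!";
    return;                              # caller is now the detached daemon
}
```

The second fork ensures the surviving process is not a session leader and so can never reacquire a controlling terminal.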

zdim

Similarly to @mob's post, here's how my web apps do it:

    # fork long task
    if (my $pid = fork) {
        # parent: return with http response to web client
    } else {
        # child: suppress further IO to ensure termination of http connection to client
        open STDOUT, '>', "/dev/null";
        open STDIN,  '<', "/dev/null";   # STDIN is read from: '<', not '>'
        open STDERR, '>', "/dev/null";
    }

    # Child carries on from here, 

Sometimes the (child) long process prints to a semaphore or status file that the web client may watch to see when the long process is complete.
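A minimal sketch of that status-file handshake (the path, file name, and helper sub are my inventions; the atomic rename ensures a polling client never reads a half-written file):

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir    = tempdir(CLEANUP => 1);
my $status = "$dir/job.status";          # path and file name are assumptions

# Write the status to a temp file, then rename: rename() is atomic on the
# same filesystem, so a reader never sees a partially written status.
sub publish_status {
    my ($text) = @_;
    open my $fh, '>', "$status.tmp" or die "open: $!";
    print {$fh} "$text\n";
    close $fh or die "close: $!";
    rename "$status.tmp", $status or die "rename: $!";
}

my $pid = fork() // die "fork: $!";
if ($pid == 0) {
    publish_status('running');
    # ... the long database job would go here ...
    publish_status('done');
    exit 0;
}

# A real CGI parent would just exit here; we wait so the demo is
# deterministic. The web client polls the status file until it reads "done".
waitpid $pid, 0;
open my $fh, '<', $status or die "open: $!";
chomp(my $state = <$fh>);
print "status: $state\n";   # prints "status: done"
```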

I don't remember which Perl adept suggested this years ago, but it's served reliably in many situations, and seems very clear from the "re-visit it years later - what was I doing?" perspective...

Note that if /dev/null isn't available (i.e., outside of UNIX/Linux), @mob's use of close might be more portable.

Bruce Van Allen