
I'm using the Symfony2 Process component to manually manage a pool of processes.

In the example below I restart 2 simple processes every 2 seconds and monitor what happens. The application breaks after restarting these processes a few hundred times.

Execution stops and I get the following PHP warning:

    proc_open(): unable to create pipe Too many open files

and then the following exception is thrown by the Symfony Process component:

    [Symfony\Component\Process\Exception\RuntimeException]
    Unable to launch a new process.

I've manually monitored the total number of open processes and it never goes above the expected limit.
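
Since the warning points at file descriptors rather than processes, a quick way to watch the descriptor count from inside the loop looks roughly like this (the helper below is purely illustrative and not part of the original command; it shells out to lsof, which is available on OS X):

    // Illustrative debugging helper, not part of the original command:
    // count the file descriptors currently held by this PHP process by
    // shelling out to lsof (the first output line is a header, so the
    // number is off by one, which is fine for spotting a leak).
    function countOpenDescriptors()
    {
        $output = shell_exec(sprintf('lsof -p %d | wc -l', getmypid()));

        return (int) trim($output);
    }

Logging this value on each iteration of the loop makes it easy to see whether descriptors keep accumulating between restarts.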

The simplified snippet below is part of a Symfony2 command and is run from the CLI (e.g. app/console hamster:run):

    $processes[] = new Process("ls > /dev/null", null, null, null, 2);
    $processes[] = new Process("date > /dev/null", null, null, null, 2);

    $sleep = 1; // polling interval in seconds (value assumed for this simplified snippet)

    while (count($processes) > 0) {
        foreach ($processes as $i => $process) {
            if (!$process->isStarted()) {
                $process->start();

                continue;
            }

            try {
                $process->checkTimeout();
            } catch (\Exception $e) {
                // Don't stop main thread execution
            }

            if (!$process->isRunning()) {
                // All processes are timed out after 2 seconds and restarted afterwards
                $process->restart();
            }
        }

        usleep($sleep * 1000000);
    }

This application is being run on a Mac server running OS X 10.8.4.

I would appreciate any hints on how to track down the root cause of this issue.

Update #1: I've simplified my function to work with basic commands like ls and date for faster testing. It still looks like the Process component fails after starting and stopping about 1,000-1,500 processes.

I suspected that proc_close() was not being called correctly for each process, but further investigation revealed that's not the case here.


1 Answer


The file handles are not being garbage collected, so they eventually hit a limit (the OS limit or a PHP limit, I'm not sure which), but you can fix it by adding an explicit garbage collection call:

    gc_collect_cycles();
    usleep($sleep * 1000000);

Also, be forewarned that garbage collection doesn't work very well inside a foreach loop because of the way PHP maps the temporary `foreach ($array as $key => $value)` variables into memory. If you need it in that part of the code, you could switch to something like this instead, which I think should allow for garbage collection inside the for loop:

$processes[] = new Process("ls > /dev/null", null, null, null, 2);
$processes[] = new Process("date > /dev/null", null, null, null, 2);

$sleep = 0;

do {
    $count = count($processes);
    for($i = 0; $i < $count; $i++) {
        if (!$processes[$i]->isStarted()) {
            $processes[$i]->start();

            continue;
        }

        try {
            $processes[$i]->checkTimeout();
        } catch (\Exception $e) {
            // Don't stop main thread execution
        }

        if (!$processes[$i]->isRunning()) {
            // All processes are timed out after 2 seconds and restarted afterwards
            $processes[$i]->restart();
        }

        gc_collect_cycles();
    }

    usleep($sleep * 1000000);
} while ($count > 0);
cbednarski
  • Good point. I didn't consider the necessity of manually calling the garbage collector. Performing some additional benchmarks (restarted the processes 100,000+ times) with the `gc_collect_cycles()` confirmed your solution. Thanks! – ukliviu Sep 12 '13 at 11:23
  • a better way would be to `->stop(0)` the process instance when done, this would effectively close the open file descriptors without having to rely on `__destruct` being called by the garbage collector (see the sketch after these comments). – Florian Klein Sep 28 '17 at 14:12
  • You saved my life with this! Is the sleep necessary? – StockBreak Mar 03 '20 at 17:04
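
For completeness, a minimal sketch of the `->stop(0)` approach from the comment above, applied to the loop in the answer (this is an interpretation of the comment, not code from its author; note that `restart()` returns a new `Process` instance, so its return value is reassigned here):

    // Sketch of the ->stop(0) suggestion: explicitly stop a finished process
    // so its pipes and file descriptors are closed immediately, instead of
    // relying on __destruct being triggered by the garbage collector.
    if (!$processes[$i]->isRunning()) {
        $processes[$i]->stop(0);

        // restart() returns a new Process instance, so keep the new reference
        $processes[$i] = $processes[$i]->restart();
    }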