I have a Laravel command that periodically uploads files to a remote S3 disk (DigitalOcean Spaces, which is S3 compatible and uses the Laravel/Flysystem S3 driver).
The problem is that the command crashes after moving around 100 files because the server runs out of file pointers. When I dump the open resources I can see that the count increases by about 5 per file/iteration, until no file pointers are left and PHP can't open any more resources.
Raising the limit with ulimit -n 999999 helps, but that doesn't really solve the problem.
<?php

use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;

$localDisk = Storage::disk('some_local_disk');
$localFiles = $localDisk->allFiles();
$localBaseDir = $localDisk->path('');
$remoteDisk = Storage::disk('some_remote_disk'); // DigitalOcean Spaces (S3 compatible)
$remoteBaseDir = 'some/folder/';

// Check resources before we start
dump(get_resources('stream'), count(get_resources('stream')));

foreach ($localFiles as $file) {
    // Skip dotfiles
    if (Str::startsWith($file, '.')) {
        continue;
    }

    $resourcesOpened = count(get_resources('stream'));
    $localLocation = $localBaseDir.$file;
    $remoteLocation = $remoteBaseDir.$file;

    $fileHandle = fopen($localLocation, 'ab+');
    $remoteDisk->put($remoteLocation, $fileHandle); // Guess this is creating resources, but why aren't they closed? How can I close them?
    $localDisk->delete($file); // Thought this would be enough
    fclose($fileHandle);

    // Check resources after each iteration
    dump('___________________', get_resources('stream'), count(get_resources('stream')));

    usleep(5000); // Pause a little to reduce the load
}
How can I close the resources that are being opened? Is there a better way to do this? I was thinking about maybe using an S3 CLI client instead, but I would like to avoid that.
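For what it's worth, this is roughly the kind of alternative I have in mind by "a better way": letting the disks open the streams via readStream()/writeStream() and explicitly closing whatever resource comes back. This is just an untested sketch, and I don't know whether it actually avoids the leak:

<?php

use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;

$localDisk = Storage::disk('some_local_disk');
$remoteDisk = Storage::disk('some_remote_disk');
$remoteBaseDir = 'some/folder/';

foreach ($localDisk->allFiles() as $file) {
    // Skip dotfiles
    if (Str::startsWith($file, '.')) {
        continue;
    }

    // Let the local disk open the stream instead of calling fopen() manually
    $stream = $localDisk->readStream($file);

    // Stream the file to the remote disk
    $remoteDisk->writeStream($remoteBaseDir.$file, $stream);

    // Flysystem may or may not close the stream itself, so only close it if it is still open
    if (is_resource($stream)) {
        fclose($stream);
    }

    $localDisk->delete($file);
}

Would that be enough to release the handles, or does the S3 adapter hold on to its own streams regardless?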