I'm using cURL to download large XML files (between 500MB and 1GB) from a remote server. The script works fine for smaller test files, but every time I try to download a file larger than a few hundred megabytes, it simply hangs: it doesn't exit and there's no error message.
I'm executing the script from the command line (CLI), so PHP itself should not time out. I have also tried cURL's verbose mode, but it shows nothing beyond the initial connection. Every time I download the file, it stops at exactly the same size (463.3MB), and the XML in the file is incomplete at that point.
Any ideas much appreciated.
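In case it matters, this is roughly how I'm turning on verbose mode, with CURLOPT_STDERR pointing at a log file so the output survives the hang (just a sketch; the log path is a placeholder, and $ch is the same handle used in the script below):

// Route cURL's verbose output to a log file instead of the console.
$verboseLog = fopen('/tmp/curl_verbose.log', 'w');
curl_setopt($ch, CURLOPT_VERBOSE, true);
curl_setopt($ch, CURLOPT_STDERR, $verboseLog);

The download script itself: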
$ch = curl_init();

// Open the destination file so cURL can stream the response straight to disk.
$fh = fopen($filename, 'w');

curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_FILE, $fh);   // write the body directly to the file handle
curl_setopt($ch, CURLOPT_HEADER, 0);   // keep headers out of the file
curl_setopt($ch, CURLOPT_TIMEOUT, 0);  // no overall transfer timeout

if (curl_exec($ch) === false) {
    echo 'Curl error: ' . curl_error($ch) . "\n";
} else {
    echo 'Operation completed without any errors';
}

$response = array(
    'header' => curl_getinfo($ch)
);

curl_close($ch);
fclose($fh);

if ($response['header']['http_code'] == 200) {
    echo "File downloaded and saved as " . $filename . "\n";
}
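For what it's worth, a check along these lines right after curl_exec() (a sketch; it reuses the same $ch before curl_close()) should show whether the server reports a Content-Length at all, and how far the transfer gets relative to it:

// Compare bytes actually received with the server-reported length.
// download_content_length is -1 when no Content-Length header is sent.
$info = curl_getinfo($ch);
printf(
    "Downloaded %.1f MB of a reported %.1f MB\n",
    $info['size_download'] / 1048576,
    $info['download_content_length'] / 1048576
);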
Again, this script works fine with smaller files, but with the large file it never even gets as far as printing an error message.
Could something else (Ubuntu 10.04 on Linode) be terminating the script? As far as I understand, the web server shouldn't matter here since I'm running the script through the CLI.
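To rule out PHP itself dying silently (for example on a fatal error with display_errors off), would something like this at the top of the script help? Just a sketch; the log path is a placeholder:

// Log whatever error PHP saw last when the script terminates, in case it
// is being killed without printing anything to the console.
register_shutdown_function(function () {
    $err = error_get_last();
    if ($err !== null) {
        file_put_contents('/tmp/download_shutdown.log', print_r($err, true), FILE_APPEND);
    }
});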
Thanks,
Matt