
I have a very weird case as follows:

This is my cURL function:

function get_data($url) {
    $ch = curl_init();
    $timeout = 500;
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

I have a txt file containing links, one per line. I read all the links into an array and tried to fetch the content:

$itemLink = file($LinkFile);
if(empty($itemLink)){
    echo "endFile";
    exit();
}
echo $itemLink[0]; //https://stackoverflow.com
echo get_data($itemLink[0]);

The result comes back empty, but when I pass the link directly to my function like this:

echo get_data('https://stackoverflow.com');

I get the full page back normally.

Does anybody know what's going on?

Muhammad Omer Aslam
2 Answers


Because you're using file() to read your document, I have a feeling the line endings are being included in your URL, and cURL is failing to handle the request.

http://php.net/manual/en/function.file.php

Each line in the resulting array will include the line ending, unless FILE_IGNORE_NEW_LINES is used.

Pass the FILE_IGNORE_NEW_LINES flag to remove any superfluous line-ending characters from the URL:

$itemLink = file( $LinkFile, FILE_IGNORE_NEW_LINES );

If this is not sufficient, use the trim() function:

echo get_data( trim( $itemLink[0] ) );

or

curl_setopt( $ch, CURLOPT_URL, trim( $url ) );
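To see the failure mode in isolation, here is a minimal sketch (using a temporary file as a stand-in for your links file): without flags, file() keeps the trailing newline on each element, so the "URL" handed to cURL is actually `https://stackoverflow.com\n`, whereas FILE_IGNORE_NEW_LINES or trim() yields a clean URL.

```php
<?php
// Simulate a links file where each line ends with "\n".
$tmp = tempnam(sys_get_temp_dir(), 'links');
file_put_contents($tmp, "https://stackoverflow.com\nhttps://example.com\n");

// Default behaviour: the line ending is kept, corrupting the URL.
$lines = file($tmp);
// $lines[0] is "https://stackoverflow.com\n" (26 chars, not 25)

// With FILE_IGNORE_NEW_LINES the newline is stripped on read.
$clean = file($tmp, FILE_IGNORE_NEW_LINES);
// $clean[0] is "https://stackoverflow.com"

// trim() also covers stray "\r" from files edited on Windows.
$safe = trim($lines[0]);

unlink($tmp);
```

Either approach works; trim() is the more defensive choice because it also strips the carriage return left behind by CRLF line endings, which FILE_IGNORE_NEW_LINES handles on most builds but a manual substr-based cleanup would not.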
Scuzzy

Maybe it's an HTTP vs. HTTPS thing?

Previously answered

curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);

source: How to use PHP CURL to bypass cross domain

Ramakay