
I must have read all the similar posts here, but I still can't figure out why this is happening.

My code:

$stmt = $this->db->stmt_init();
$stmt->prepare("SELECT Table1.id,Name,type,text,fname,lname FROM Table1, Table2 WHERE Table1.email = Table2.email AND type='text' AND Table1.Id=?");
$stmt->bind_param("i", $id);
$stmt->bind_result($legTxtId, $legTxtName, $legTxtType, $legTxtText, $legTxtFname, $legTxtLname);
$stmt->execute();
$results = array();
while ($stmt->fetch())
{
    $results[] = array(
        'legTxtId' => $legTxtId, 'legTxtName' => $legTxtName, 'legTxtType' => $legTxtType,
        'legTxtText' => $legTxtText, 'legTxtFname' => $legTxtFname,
        'legTxtLname' => $legTxtLname);
}
$stmt->close();
return $results;

Now, I am using the exact same code in a different function that is called before this one, and it works fine even though it returns one more field.

This one in particular only returns one row with just plain short text (no photos or anything), so it should not fail because it's definitely less than 64M.

Can anyone see what the problem is?

mixkat
  • What is the number of records in your `Table1` and `Table2`? – verisimilitude Aug 11 '12 at 14:18
  • Both queries should only return one row atm.. and the first one does; this one doesn't work at all.. even if I remove the first one, this fails, and I know that the query is valid, so.. – mixkat Aug 11 '12 at 14:20
  • 2
    Are any of these fields `MEDIUMTEXT`, `MEDIUMBLOB` or similar - capable of storing huge amounts of data? A similar question was [posted recently](http://stackoverflow.com/questions/11913818/php-massive-memory-usage-for-sql-query), you might have to switch to PDO or enable mysqlnd to make that work. – DCoder Aug 11 '12 at 14:22
  • 1
    You did'nt get my point. This query opens up two instances of the tables in-memory and will match each (Table 1) row's email with the corresponding entry in the other table. That's why I asked you the table size. If both tables contain very large amounts of data, you may resort to `mysqlnd` or look up some efficient/optimized data retrieval techniques. – verisimilitude Aug 11 '12 at 14:24
  • @DCoder Yes.. one field is in fact LONGTEXT... Can you elaborate a bit on the PDO and mysqlnd suggestions? Thanks – mixkat Aug 11 '12 at 14:27
  • @verisimilitude Oh... Table2 only has 3 text fields limited to 15 chars each, and Table1 does contain a LONGTEXT field and a BLOB field.. So same as above.. Could you maybe elaborate a bit on mysqlnd? – mixkat Aug 11 '12 at 14:30
  • 1
    http://dev.mysql.com/downloads/connector/php-mysqlnd/. I guess the link provided by @DCoder above may be of some help to you. Please go through it. – verisimilitude Aug 11 '12 at 14:32
  • As the linked question comments say, it appears that mysqli uses a rather inefficient method of allocating memory to store the returned variable value - "let's just allocate the maximum amount of memory this column can contain, to be safe". Based on some ad-hoc testing, it seems that enabling the mysqlnd driver or switching to PDO would solve this problem - their code is smarter about memory allocation. – DCoder Aug 11 '12 at 14:32
  • Please can you post the exact size that PHP is trying to allocate and failing on? I wouldn't be surprised if it was 50331646 bytes. – Danack Aug 11 '12 at 14:56
  • @DCoder Ok I feel a bit stupid now...:( How can I enable mysqlnd? – mixkat Aug 11 '12 at 14:57
  • [That depends on your OS and PHP version](http://us.php.net/manual/en/mysqlnd.install.php). It might be difficult if you're on shared hosting you don't control. – DCoder Aug 11 '12 at 14:59
  • @Danack The exact size is: 4294967296 – mixkat Aug 11 '12 at 15:16
  • @DCoder Put up an answer if you like so that I can accept it and you get your rep :) – mixkat Aug 11 '12 at 15:17
  • mixkat - yeah also not too surprising. 4294967296 = 0x100000000. i.e. it's allocating the max possible size for the longtext result, which is dumb. – Danack Aug 11 '12 at 15:48

3 Answers


As discussed in the other question, it seems the two solutions are:

1) Switch to the mysqlnd connector as this doesn't show the same bug.

If you're using Yum to install PHP (e.g. on an Amazon EC2 server), you can achieve that by changing the setup of your LAMP stack from this:

sudo yum install php-mysql php php-xml php-mcrypt php-mbstring php-cli mysql httpd 

to:

sudo yum install php-mysqlnd php php-xml php-mcrypt php-mbstring php-cli mysql httpd 
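
Once installed, one quick way to confirm that mysqlnd is the active driver is to check the client info string reported by the mysqli extension; it mentions mysqlnd when the new connector is in use (a small sanity-check sketch, not part of the original answer):

// Reports e.g. "mysqlnd 5.0.10 - ..." under mysqlnd,
// or a plain libmysql client version string otherwise.
if (strpos(mysqli_get_client_info(), 'mysqlnd') !== false) {
    echo "mysqlnd is active";
} else {
    echo "still using libmysql";
}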

2) Use either store_result or use_result, which also don't show the massive memory allocation issue.
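
By way of illustration, here's a minimal sketch of option 2 applied to the code from the question (variable names are the asker's own; the only change is the store_result()/free_result() pair, so the driver buffers the actual rows instead of pre-allocating for the maximum possible LONGTEXT size):

$stmt = $this->db->stmt_init();
$stmt->prepare("SELECT Table1.id,Name,type,text,fname,lname FROM Table1, Table2 WHERE Table1.email = Table2.email AND type='text' AND Table1.Id=?");
$stmt->bind_param("i", $id);
$stmt->execute();
$stmt->store_result();   // buffer the result set client-side before binding
$stmt->bind_result($legTxtId, $legTxtName, $legTxtType, $legTxtText, $legTxtFname, $legTxtLname);

$results = array();
while ($stmt->fetch())
{
    $results[] = array(
        'legTxtId' => $legTxtId, 'legTxtName' => $legTxtName,
        'legTxtType' => $legTxtType, 'legTxtText' => $legTxtText,
        'legTxtFname' => $legTxtFname, 'legTxtLname' => $legTxtLname);
}
$stmt->free_result();    // release the buffered result set
$stmt->close();
return $results;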

Switching to mysqlnd is probably the better long-term solution, as it is generally better written than the existing php-mysql connector (e.g. results aren't duplicated in MySQL memory before being copied to PHP memory) and is the default connector from PHP 5.4.0 onwards.

Danack

Then why do you use a while loop for one record?

Gundars Mēness

Moving the data to a local buffer with store_result() may help:

// ...
$stmt->prepare(...);
$stmt->bind_param(...);

$stmt->execute();
$stmt->store_result(); // buffer the complete result set on the client first

$stmt->bind_result(...);
// ...
Niko