
I have code that runs perfectly with Doctrine_Core::HYDRATE_ARRAY but crashes with Doctrine_Core::HYDRATE_RECORD. The page loads for about two minutes and then shows a standard browser error message, something like

Connection to the server was lost during the page load.

(My browser is localized, so that's a translation rather than the exact error message.)

Running `SHOW PROCESSLIST` from the MySQL command line gives:

+-----+--------+-----------------+--------+---------+------+-------+------------------+
| Id  | User   | Host            | db     | Command | Time | State | Info             |
+-----+--------+-----------------+--------+---------+------+-------+------------------+
| 698 | root   | localhost:53899 | NULL   | Query   |    0 | NULL  | show processlist |
| 753 | *user* | localhost:54202 | *db1*  | Sleep   |  102 |       | NULL             |
| 754 | *user* | localhost:54204 | *db2*  | Sleep   |  102 |       | NULL             |
+-----+--------+-----------------+--------+---------+------+-------+------------------+

The code itself:

 $q = Doctrine_Query::create()
     ->select("fc.*")
     ->from("Card fc")
     ->leftJoin("fc.Fact f")
     ->where("f.deckid = ?", $deck_id);
 $card = $q->execute(array(), Doctrine_Core::HYDRATE_RECORD);
 // Commenting out the line above and uncommenting the line below avoids the error
 // $card = $q->execute(array(), Doctrine_Core::HYDRATE_ARRAY);

So I suspected the query was not populated with the correct SQL. However, $q->getSqlQuery() outputs correct SQL that runs perfectly when executed via the command line or phpMyAdmin.
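One way to confirm it's the hydrator rather than the query is to run the generated SQL directly through the Doctrine connection, bypassing hydration entirely. A minimal sketch, assuming a configured Doctrine 1.x connection (variable names are illustrative):

```php
<?php
// Run the generated SQL through the raw connection, skipping the hydrator.
// Assumes the default Doctrine 1.x connection is already configured.
$conn = Doctrine_Manager::getInstance()->getCurrentConnection();
$stmt = $conn->execute($q->getSqlQuery(), array($deck_id));
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
// If $rows contains the expected data, the SQL is fine and the problem
// lies in record hydration, not in query building.
```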

Server configuration:

Apache/2.2.4 (Win32) mod_ssl/2.2.4 OpenSSL/0.9.8k mod_wsgi/3.3 Python/2.7.1 PHP/5.2.12
MySQL 5.1.40-community

Everything runs on localhost, so that's not a connection issue.

The amount of data for this specific query is very small - about a dozen records - so it has nothing to do with memory or time limits. safe_mode is off, display_errors is on, error_reporting is 6135.

Could somebody point to some hints or caveats I'm missing?

UPDATE: What's weirdest is that it works with HYDRATE_RECORD from time to time.

UPDATE2: It crashes when I try to fetch something from the query, e.g. getFirst(). Without fetching it works, but I really don't need a query from which I can't fetch data.
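Since array hydration works here, a stopgap is to read the same data from the hydrated arrays instead of record objects. A sketch under that assumption:

```php
<?php
// Workaround sketch: hydrate to plain arrays (which works in this case)
// and read fields by key instead of calling getFirst() on record objects.
$cards = $q->execute(array(), Doctrine_Core::HYDRATE_ARRAY);
if (!empty($cards)) {
    $first   = $cards[0];           // plain associative array, no record overhead
    $content = $first['content'];   // keys match the selected column names
}
```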

UPDATE3: I've worked around this issue, but I'm still interested in what's going on.

Update 4:

SQL query:

SELECT f.id AS f__id, f.createdat AS f__createdat, f.updatedat AS f__updatedat,
    f.flashcardmodelid AS f__flashcardmodelid, f.source AS f__source, 
    f.content AS f__content, f.md5 AS f__md5 
FROM flashcard f 
LEFT JOIN fact f2 ON f.id = f2.flashcardid AND (f2.deleted_at IS NULL) 
WHERE (f2.deckid = 19413)

Output:

f__id   f__createdat            f__updatedat            f__flashcardmodelid     f__source           f__content
245639  2011-08-05 20:00:00     2011-08-05 20:00:00     179                     jpod lesson 261     {"source":"\u7f8e\u5473\u3057\u3044","target":"del... 

So the query itself is OK; data is fetched as expected. Do you need the model definitions?

Update 5: When running the query with HYDRATE_RECORD, httpd.exe consumes 100% of one of the CPU cores.

Final Update: Don't know why, but now it works... I haven't changed anything. Looks like it was just waiting for me to place a bounty on this question. :) But since I've already placed a bounty, any idea of what difference between HYDRATE_ARRAY and HYDRATE_RECORD might crash the script is appreciated.

J0HN
  • Can you show some sample output of the generated query, when run through the command line or phpMyAdmin? Just the column names and a row of data would be fine. Also, are there any clues in your MySQL or Apache error logs? (I wonder if something is perhaps running out of memory when hydrating records, but not arrays, which will be smaller...) – Matt Gibson Aug 18 '11 at 08:53
  • No, I've checked every single log. :) Also, that's on dev machine with all error reporting enabled (I'm sure about that). I'll run that query and update the post in a few minutes. – J0HN Aug 18 '11 at 08:56
  • As an aside, what's the point in doing a LEFT JOIN to `fact` when you're then doing a `WHERE fact.deckid = `? Isn't that just an INNER JOIN, really? (As a row would have to exist in `fact` for `deckid` to have a value...) – Matt Gibson Aug 18 '11 at 09:57
  • Well, right, but that doesn't solve the problem. The problem is that it works as-is with `HYDRATE_ARRAY` and doesn't work with `HYDRATE_RECORD` – J0HN Aug 18 '11 at 10:05

2 Answers


I've seen similar behavior when dumping the whole record set, or even a single record, in some way (print_r, var_dump, and so on). This is caused by the fact that Doctrine uses a highly structured class hierarchy that contains a lot of circular references. This is obviously not the case when you use Doctrine_Core::HYDRATE_ARRAY.

So any of the functions mentioned (though I think there may be other ways to reproduce it) will enter an endless loop, causing 100% CPU usage until a kill point is reached.

Don't know if this can help in your case.
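If circular references in the record graph are indeed the culprit, two Doctrine 1.x record facilities can help: converting a record to a plain array with toArray() before dumping it, and free() to break the reference cycles when the record is no longer needed. A hedged sketch:

```php
<?php
// Dump a record safely by converting it to a plain array first;
// toArray() flattens related records instead of following object references.
$card = $q->fetchOne(array(), Doctrine_Core::HYDRATE_RECORD);
print_r($card->toArray());

// When finished with the record graph, break the circular references
// explicitly so the objects can be reclaimed (PHP 5.2 has no
// cycle-collecting garbage collector; that arrived in 5.3):
$card->free(true);   // true = also free related records
```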

Fabio
  • Thanks for the answer, but I'm aware of this. No recursive printing functions are used. Also, this "bug" is caused by circular references in Doctrine objects, and I have `XDebug` installed with max depth set to 3, so when I accidentally dump a Doctrine object I can still see the output. In this case, I had 100% CPU load, no output and, eventually, a dropped connection. – J0HN Aug 26 '11 at 12:48

I have had a similar issue with Doctrine 1.2, and found out that PHP reported a fatal error due to exceeding the memory limit or the execution time; sometimes the situation even caused PHP to segfault.

You can find these errors in your Apache error log file. On my OS X box those files are in /var/log/apache2/error_log. You can increase the allowed memory or max execution time in your PHP configuration.

In my case, it was caused by the number of records fetched from the database, which led to excessive memory consumption. Hydrating Doctrine_Records seems to be a relatively expensive operation at times.
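A quick way to check whether record hydration itself is the memory hog is to measure usage around the execute() call. A rough sketch (variable names are illustrative):

```php
<?php
// Measure how much memory record hydration allocates for this query.
$before = memory_get_usage(true);
$cards  = $q->execute(array(), Doctrine_Core::HYDRATE_RECORD);
$after  = memory_get_usage(true);
printf("hydration allocated %.1f MB\n", ($after - $before) / 1048576);
```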

Just out of curiosity, how many rows do you expect in your result?

Pelle
  • That query returns just about a dozen records. I know how to increase the memory limit; it's actually set to 512MB now, so I doubt that's the issue. I checked both the PHP and Apache error logs before posting, and there are no clues in them. I have error reporting enabled, so I would see an error message if there were one. So, thank you for the answer, but it's not correct :) – J0HN Aug 18 '11 at 09:57