array_fill(1, 56, array_fill(1, 36, 99));
will lead to 57 calls, while the other way round leads to 37. The argument of those calls is somewhat larger in the latter case, so things tend to even out:
56 x 36: 1350.6 Kcalls/s
36 x 56: 1353.0 Kcalls/s
<?php
// Count how many thousands of array_fill() constructions complete in ~10 seconds.
$a = microtime(true);
$k = 0;
for (;;) {
    for ($i = 0; $i < 1000; $i++) {
        $spreadRaw = array_fill(1, 56, array_fill(1, 36, 99));
    }
    $k++; // each increment of $k corresponds to 1000 calls (1 Kcall)
    if ((($b = microtime(true)) - $a) > 10) {
        break;
    }
}
$t = $b - $a;
// The Kcalls/s figures above are simply $k / $t.
print "$k Kcalls in $t seconds";
?>
with a difference of about 0.178%, or roughly one hour per month of continuous operation (0.178% of a month, about 730 hours, is a bit over an hour).
The memory footprint is a bit more difficult to measure, but even if the overhead of each row were on the order of a kilobyte (which it surely isn't), the twenty extra rows would amount to some twenty kilobytes at the outside.
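If you want to check this yourself, one rough approach is to compare memory_get_usage() before and after building the array in each orientation; a minimal sketch (mine, not part of the original test):

<?php
// Rough memory comparison of the two orientations. Note that PHP copies arrays
// lazily, so the outer array initially shares the single inner array.
$before = memory_get_usage();
$byRows = array_fill(1, 56, array_fill(1, 36, 99));
$middle = memory_get_usage();
$byCols = array_fill(1, 36, array_fill(1, 56, 99));
$after  = memory_get_usage();

printf("56 x 36: %d bytes\n", $middle - $before);
printf("36 x 56: %d bytes\n", $after - $middle);
?>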
You might want to experiment with some code coverage or profiling tools to discover where your bottlenecks really are (e.g. http://www.xdebug.org/docs/profiler ).
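If you go the Xdebug route, the profiler is enabled from php.ini rather than from code. The settings below are the Xdebug 2 names (newer Xdebug 3 releases use xdebug.mode=profile instead); the resulting cachegrind.out.* files can be opened in tools such as KCachegrind or Webgrind:

; php.ini sketch, assuming Xdebug 2 is installed as a Zend extension
zend_extension = xdebug.so          ; older PHP versions want the full path here
xdebug.profiler_enable = 1          ; profile every script run
xdebug.profiler_output_dir = /tmp   ; cachegrind.out.* files are written here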
This answer might also be useful to you.
Update
A larger test case, swapping the dimensions between 1000 and 10000 rows/columns, yields
1000 rows, 10000 columns = 5773 calls/s
10000 rows, 1000 columns = 5652 calls/s
with a much larger difference of 2.14%. I haven't been able to get conclusive data about memory usage; at this point I'm convinced that there is no difference worth speaking of.
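For reference, a sketch of how such a comparison can be run (this is a reconstruction with a made-up iteration count, not the exact script behind the figures above):

<?php
// Time the same nested array_fill() with the outer and inner dimensions swapped.
$iterations = 10000;
$cases = array(
    '1000 rows, 10000 columns' => array(1000, 10000),
    '10000 rows, 1000 columns' => array(10000, 1000),
);

foreach ($cases as $label => $dims) {
    list($rows, $cols) = $dims;
    $start = microtime(true);
    for ($i = 0; $i < $iterations; $i++) {
        $grid = array_fill(1, $rows, array_fill(1, $cols, 99));
    }
    printf("%s = %d calls/s\n", $label, $iterations / (microtime(true) - $start));
}
?>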
This is only creation time, though: what matters most is access time during operation, and that depends on the algorithm. If access is done by rows, for example, allocating the values in rows will surely be more efficient; if by columns, the reverse. After all, access operations can be expected to outnumber creations by several orders of magnitude.
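To make that last point concrete, here is a toy example (mine, reusing the 56 x 36 grid from above) contrasting the two access orders; the loop whose order matches the layout walks each inner array once, while the other one indexes back into the outer array on every step:

<?php
$grid = array_fill(1, 56, array_fill(1, 36, 99));

// Row-wise access: each row is fetched once and then walked in full.
$sum = 0;
foreach ($grid as $row) {
    foreach ($row as $value) {
        $sum += $value;
    }
}

// Column-wise access over the same row-oriented structure: every element
// lookup goes back through the outer array, hopping between rows.
$sum2 = 0;
for ($col = 1; $col <= 36; $col++) {
    for ($row = 1; $row <= 56; $row++) {
        $sum2 += $grid[$row][$col];
    }
}

print "$sum $sum2"; // both are 56 * 36 * 99 = 199584
?>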