I benchmarked these three options (the other benchmark I found wasn't splitting CPU cycles per call and was four years old):
class foo {
    public static function bar() {
        // Returns "foo::bar"; the argument passed by the callers below is ignored.
        return __METHOD__;
    }
}

function directCall() {
    return foo::bar($_SERVER['REQUEST_TIME']);
}

function variableCall() {
    return call_user_func(array('foo', 'bar'), $_SERVER['REQUEST_TIME']);
}

function reflectedCall() {
    return (new ReflectionMethod('foo', 'bar'))->invoke(null, $_SERVER['REQUEST_TIME']);
}
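Note that reflectedCall() constructs a new ReflectionMethod on every call, so its timing includes the constructor cost, not just invoke(). If you wanted to measure invoke() in isolation, a variant along these lines (not part of the runs below) could be added to the callback list:

function cachedReflectedCall() {
    // Build the ReflectionMethod once and reuse it on subsequent calls.
    static $method = null;
    if ($method === null) {
        $method = new ReflectionMethod('foo', 'bar');
    }
    return $method->invoke(null, $_SERVER['REQUEST_TIME']);
}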
The absolute time taken for 1,000,000 iterations:
print_r(Benchmark(array('directCall', 'variableCall', 'reflectedCall'), 1000000));

Array
(
    [directCall] => 4.13348770
    [variableCall] => 6.82747173
    [reflectedCall] => 8.67534351
)
And the relative time, also with 1,000,000 iterations (separate run):
ph()->Dump(Benchmark(array('directCall', 'variableCall', 'reflectedCall'), 1000000, true));

Array
(
    [directCall] => 1.00000000
    [variableCall] => 1.67164707
    [reflectedCall] => 2.13174915
)
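As a sanity check, dividing the absolute times from the first run gives 6.83 / 4.13 ≈ 1.65 and 8.68 / 4.13 ≈ 2.10, which is in line with the ratios from the separate relative run above.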
It seems that reflection performance improved considerably in PHP 5.4.7 (from roughly 500% of the direct-call time down to ~213%).
Here's the Benchmark() function I used, in case anyone wants to re-run this benchmark:
function Benchmark($callbacks, $iterations = 100, $relative = false)
{
    set_time_limit(0);

    // Keep only valid callables; they are used as array keys, so string callbacks are expected.
    if (count($callbacks = array_filter((array) $callbacks, 'is_callable')) > 0)
    {
        $result = array_fill_keys($callbacks, 0);

        // Any arguments after the third parameter are forwarded to every callback.
        $arguments = array_slice(func_get_args(), 3);

        for ($i = 0; $i < $iterations; ++$i)
        {
            foreach ($result as $key => $value)
            {
                $value = microtime(true); // start time for this single call
                call_user_func_array($key, $arguments);
                $result[$key] += microtime(true) - $value; // accumulate elapsed time
            }
        }

        asort($result, SORT_NUMERIC); // fastest callback first

        // Iterate slowest-first so the fastest (first) entry is only overwritten
        // on the last pass; reset($result) therefore still returns its raw value
        // while the ratios are being computed.
        foreach (array_reverse($result) as $key => $value)
        {
            if ($relative === true)
            {
                $value /= reset($result); // normalize against the fastest callback
            }

            $result[$key] = number_format($value, 8, '.', '');
        }

        return $result;
    }

    return false;
}
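Since any arguments passed after the third parameter are forwarded to every callback, you can also benchmark functions that take input without writing wrapper functions. A quick sketch (the two built-in string functions are just stand-ins):

// 'Hello World' is forwarded to each callback via Benchmark()'s variadic tail.
print_r(Benchmark(array('strtolower', 'strtoupper'), 100000, true, 'Hello World'));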