
I need to determine whether the bottleneck in a system's performance is on my end or a client's end. The gist of my proposed solution is this:

<?php

// Buffer the output so nothing is sent to the client
ob_start();

// Run the application
pre_execution_setup();
execute_application();

// Get time after execution is complete (POINT A)
$executionTime = microtime(true);

// Send the output
ob_end_flush();

// Calculate the time since POINT A
$transportTime = microtime(true) - $executionTime;

// Calculate time between POINT A and the time the request started processing
$executionTime -= $_SERVER['REQUEST_TIME_FLOAT'];

Is there any reason this would not provide reliable results? I believe the bottleneck is in the client's network at one location since their other locations have no performance issues, but I do have to prove it. If I am correct, these two time measurements would be very different for slow calls (short execution time, long transport time). So far, that's what I've seen in the results. However, I'm not 100% sure that the results are reliable.
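For reference, here is a sketch of the same idea with one addition: calling `flush()` after `ob_end_flush()` asks the SAPI and (where possible) the web server to push their buffers too. This is an illustrative variant, not a guarantee of delivery timing; even after `flush()`, the bytes may still sit in Apache's buffers (e.g. `mod_deflate`), a FastCGI buffer, a reverse proxy, or the OS TCP send buffer, so the second measurement is really "time to hand the response off", not "time for the client to receive it":

```php
<?php

// Buffer the output so nothing is sent to the client during execution
ob_start();

pre_execution_setup();
execute_application();

// POINT A: application work is done
$executionTime = microtime(true);

// Release PHP's output buffer, then ask the SAPI/web server to flush as well.
// Caveat: downstream buffering (mod_deflate, FastCGI, proxies, TCP buffers)
// can still hold the data, so this measures handoff, not client receipt.
ob_end_flush();
flush();

// Time spent handing the buffered output downstream
$transportTime = microtime(true) - $executionTime;

// Time from request start to POINT A
$executionTime -= $_SERVER['REQUEST_TIME_FLOAT'];
```

To corroborate from outside PHP, you could compare Apache's own per-request timing (the `%D` `LogFormat` directive logs microseconds taken to serve the request) against a client-side measurement such as `curl -w '%{time_starttransfer} %{time_total}'` run from the slow location; a large gap between server-side and client-side totals would point at the network rather than the application.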

landon
  • See https://stackoverflow.com/questions/55885892/can-we-use-serverrequest-time-float-to-get-a-reliable-process-time – Alan Jan 11 '22 at 20:47
  • That doesn't quite address it as my concern is about confirming that the output buffer flush time is representative of the time it takes for a client to receive the output. If the buffer flush just sent the data into some kind of apache cache, that wouldn't be helpful as it would be measuring the time it took to send the data to apache, rather than the client. – landon Jan 11 '22 at 20:50

0 Answers