I found this question while looking for ways to measure just the request-response time in axios, but I thought it was interesting enough to warrant an alternative answer to the core question.
If you really want to know the network latency, the technique used by the Precision Time Protocol (PTP) could be some inspiration.
The concept
This drawing hopefully explains what I mean by "network latency":
```
         API Request              API Response
              |                          ^
              v                          |
UI ----------+--------------------------+-----------> Time
             A \                       ^ B
                \                     /
                 \                   /
                  \                 /
                   v               /
Backend ---------+--------------+-------------------> Time
                a |             ^ b
                  |             |
                  +- processing +
                        time
```
Where:
- A is the time when the UI sends the request
- a is the time when the backend receives the request
- b is the time when the backend sends the response
- B is the time when the UI receives the response
The time it takes from A->a is the network latency from UI->backend.
The time it takes from b->B is the network latency from backend->UI.
Each step of request/response can calculate these and add them to
the respective request/response object.
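As a sketch of that idea (the field names `uiSentAt`, `backendReceivedAt`, and `backendSentAt` are my own invention, not a standard API): the UI stamps the request when it leaves, the backend stamps arrival and departure, and each leg's delta can be computed. Keep in mind the caveat below that each per-leg number mixes two unsynchronized clocks.

```typescript
// Sketch only: timestamp field names are made up for illustration.
interface TimedRequest {
  uiSentAt: number;          // A: UI clock, ms since epoch
}

interface TimedResponse {
  backendReceivedAt: number; // a: backend clock
  backendSentAt: number;     // b: backend clock
}

// Per-leg deltas. Note: each subtracts timestamps from two DIFFERENT
// clocks, so on their own these are not reliable latencies (see below).
function legDeltas(req: TimedRequest, res: TimedResponse, uiReceivedAt: number) {
  return {
    outbound: res.backendReceivedAt - req.uiSentAt, // a - A
    inbound: uiReceivedAt - res.backendSentAt,      // B - b
  };
}
```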
What you cannot do with this
- You probably won't be able to precisely sync the clocks this way; there will be too much jitter.
- You can't tell the inbound and outbound latencies separately, as you have no way to know the relationship in time of `A` to `a`, or of `B` to `b` (the two clocks are not synchronized).
What you might be able to do with this
The total time seen in the UI (`B - A`), less the total time seen in the backend (`b - a`), should be enough to get a good estimate of the round-trip network latency, i.e.:

`network_latency = ((B - A) - (b - a)) / 2`
Averaged over enough samples, this might be good enough?
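That estimate is easy to compute and average; a minimal sketch, using the variable names from the diagram:

```typescript
// Round-trip estimate from the diagram's four timestamps: A/B on the
// UI clock, a/b on the backend clock. Only differences within the same
// clock are taken, so the two clocks need not be synchronized.
function estimateNetworkLatency(A: number, B: number, a: number, b: number): number {
  return ((B - A) - (b - a)) / 2; // one-way estimate, assumes symmetric latency
}

// Smooth out jitter by averaging many [A, B, a, b] samples.
function meanLatency(samples: Array<[number, number, number, number]>): number {
  const total = samples.reduce(
    (sum, [A, B, a, b]) => sum + estimateNetworkLatency(A, B, a, b), 0);
  return total / samples.length;
}
```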
FWIW, you could just have the backend include its own `processing_time` in the response; the UI would then store `A` in the context of the request and calculate `B - A` when a successful response comes back. The idea is the same, though.
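That simpler variant might look like this (a sketch under assumed names: `fetchFn` stands in for any HTTP client call such as axios or fetch, and `processingTimeMs` is a hypothetical response field):

```typescript
// The UI only ever subtracts its own clock (B - A); the backend measures
// b - a itself and reports it as processingTimeMs in the response body.
async function timedCall<T extends { processingTimeMs: number }>(
  fetchFn: () => Promise<T>,
): Promise<{ result: T; networkLatencyMs: number }> {
  const A = Date.now();           // request leaves the UI
  const result = await fetchFn(); // backend-side processing happens here
  const B = Date.now();           // response arrives back at the UI
  return { result, networkLatencyMs: ((B - A) - result.processingTimeMs) / 2 };
}
```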