I don't know of a standard protocol for this (perhaps one exists), but I can offer a method similar to the way scientists measure speeds near the speed of light:
Have a process on each server "ping" the other and time how long the response takes. Then start sending pings periodically, timed so that each one goes out exactly when you expect the previous response to come in. By averaging the intervals (and discarding any far-off samples), the two servers will after a while be "thumping away" at the same rhythm. Each server can also measure the duration of a single "beat" to very high accuracy, by dividing a long stretch of time by the number of beats it contained.
Once the rhythm is established, if you know that one server's time is correct, or you simply want to use it as the reference, then you know what time it is on that server at the moment your signal reaches it. Along with its response it sends you the time *it* has, and you can use that to bring your own clock into sync.
Last but not least, most operating systems let non-kernel (user-mode) code act only with timer granularity on the order of tens of milliseconds; that is, you cannot expect something to happen at a precision finer than that. The only way to overcome it is a "native" DLL or driver that can run close to the hardware clock, and even that only buys you a certain reaction speed, depending on the system (hardware and software).
Read about real-time systems, and think about what the "server" you are talking about actually is (Windows? Linux? Embedded software on a microchip? Something else?), since that determines what timing guarantees you can get.