Get time difference (with millisecond accuracy) from a remote server API


Let's say that at local timestamp 1600000000.123 (.123 being the milliseconds part) our server sends a request to an external API's server-time endpoint, which reports that the server time at the moment the request reached it was 1600000000.555.

Of course, there was latency before the request reached the server, and again before the response made it back to us. When the server answered .555, our own server's clock at that exact moment may already have read 1600000000.602 (or any other value).

So how can we determine what the time on the originating server was when the remote API produced that .555 answer? Should we inspect response headers, or what approach gives the millisecond-level difference with the best possible accuracy?

(P.S. The language doesn't matter much - be it PHP or Node.js.)

CodePudding user response:

You're using the internet here, a network of connected IP networks, to communicate from your client to the server and back again. As you know, IP networks aren't deterministic in their timing. So there are many unknowns.

A good way to do what you ask is to assume two things.

  1. The first time your client hits the remote API there will be some connection setup time; setting up an HTTPS connection, for example, takes time.

  2. After the first hit to the API, your request takes about the same amount of time to reach the server as the response takes to travel back.

Given those assumptions, you can try the following (a worked example and a code sketch appear below the list).

  • hit the API twice and ignore the results (to make sure the connection is set up).

  • make a note of the local time: local_start_time.

  • hit the API again.

  • when you get the response, make a note of the local time again: local_end_time.

  • assume that the server read its clock halfway between local_start_time and local_end_time.

    actual_server_time = reported_server_time + (local_end_time - local_start_time) * 0.5
    
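To make that concrete with the question's numbers (treating 1600000000.123 as local_start_time and assuming the response arrived back at 1600000000.602, i.e. local_end_time): the round trip took 0.479 s, half of that is about 0.24 s, so the estimated server time when the response arrived is 1600000000.555 + 0.24 ≈ 1600000000.795. The offset is then 1600000000.795 - 1600000000.602 ≈ 0.193 s, meaning the server's clock appears to run roughly 193 ms ahead of the local clock.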

This is close to the best you can do.
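
Here is a minimal Node.js/TypeScript sketch of that procedure. The /server-time URL and the { "epochMs": ... } response shape are assumptions for illustration; adapt them to whatever the real API returns.

    // Sketch only: estimate the offset between the remote server's clock and ours.
    // Assumes Node 18+ (global fetch) and a hypothetical endpoint that returns
    // JSON like { "epochMs": 1600000000555 }; adjust to the real API's response.

    async function fetchServerEpochMs(url: string): Promise<number> {
      const res = await fetch(url);
      const body = (await res.json()) as { epochMs: number };
      return body.epochMs;
    }

    async function estimateClockOffsetMs(url: string): Promise<number> {
      // Hit the API twice and discard the results so DNS/TCP/TLS setup
      // doesn't get counted in the measurement.
      await fetchServerEpochMs(url);
      await fetchServerEpochMs(url);

      const localStart = Date.now();                      // local_start_time
      const reportedServerTime = await fetchServerEpochMs(url);
      const localEnd = Date.now();                        // local_end_time

      // Assume the server read its clock halfway through the round trip.
      const estimatedServerNow = reportedServerTime + (localEnd - localStart) * 0.5;

      // Positive result => the server's clock is ahead of the local clock.
      return estimatedServerNow - localEnd;
    }

    // Usage (placeholder URL):
    estimateClockOffsetMs("https://api.example.com/server-time")
      .then((offsetMs) => console.log(`Estimated offset: ${offsetMs.toFixed(1)} ms`));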

Another tip: if the server time (in UTC) is not within a few seconds of your client time, issue a warning message. If you trust your server's administrators, warn your user that their local machine is not synchronized via the Network Time Protocol (NTP). Almost all internet-connected operating systems handle this time synchronization automatically these days.

If you don't trust your server's administrators, your warning will necessarily be a bit more vague.
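
For the warning itself, a tiny sketch on top of the offset estimate above (the five-second threshold is an arbitrary stand-in for "a few seconds"):

    // Sketch: warn if the remote clock and the local clock disagree too much.
    const MAX_ACCEPTABLE_SKEW_MS = 5_000; // stand-in for "a few seconds"

    function warnIfClockSkewed(offsetMs: number): void {
      if (Math.abs(offsetMs) > MAX_ACCEPTABLE_SKEW_MS) {
        console.warn(
          `Clock skew of ${(offsetMs / 1000).toFixed(1)} s detected; ` +
            "check that this machine is synchronized via NTP (OS Date & Time settings)."
        );
      }
    }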

A lot of authorization / authentication tech (JSON Web Tokens, SAML, Google Authenticator-style TOTP codes, and so on) relies on time synchronization.

(I once had a customer try to integrate with a SAML authentication protocol from a non-synchronized machine set to some random time of day. My SAML implementation correctly rejected the incoming auth tokens because they were from several minutes in the future. So I added the warning message. It asked my customers to use their OS Date & Time settings screen to sync up. The problem hasn't recurred since.)
