Obtaining 17 digits of precision for a Julian datetime in C


I am trying to convert some JavaScript code to C for obtaining a Julian datetime with 17 digits of precision. The JS code gives me this precision, but the equivalent C code gives no more than 7 digits. The 17-digit precision is needed because it is used to find the altitude and azimuth of celestial bodies in real time with greater precision.

Here is the JS code.

function JulianDateFromUnixTime(t){
    //Not valid for dates before Oct 15, 1582
    return (t / 86400000) + 2440587.5;
}

function setJDToNow(){
    const date=new Date();
    const jd=JulianDateFromUnixTime(date.getTime());
    document.getElementById("jd").value=jd;
}

Calling this from the HTML below gives the value 2459349.210248739:

<tr><td align=right>Julian Date:</td><td><input type=text id="jd" value="2459349.210248739"></td><td><input type=button value="Now" onclick='setJDToNow();'></td></tr>


Here is the C code

#include <chrono>
#include <cstdint>
#include <iostream>

uint64_t timeSinceEpochMillisec() {
  using namespace std::chrono;
  return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
}

uint64_t JulianDateFromUnixTime(uint64_t t){
    //Not valid for dates before Oct 15, 1582
    return (t / 86400000) + 2440587.5;
}

int main() {
  std::cout << JulianDateFromUnixTime(timeSinceEpochMillisec()) << std::endl;
  return 0;
}

This gives 2459848 as the value.

Question: How do I get 17 digits of precision?

Note: The version of GCC I am using is MSYS2-MINGW-64 GCC 12.1.0

CodePudding user response:

At first glance, I see three issues here:

  1. Your Julian Date is a floating point number, so your function should return double, not uint64_t, which is an unsigned integer;

  2. You want t / 86400000 to be a floating point division, not a Euclidean (integer) division, which discards the fractional part. There are several ways to do that; the easiest is to divide by a double literal, i.e. t / 86400000.0. Some may consider that too subtle and prefer double(t) / 86400000.0 or even static_cast<double>(t) / 86400000.0.

  3. Even if you return a double, the display format won't be the one you want by default. Set it with std::fixed and std::setprecision (a corrected sketch follows this list).
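
Putting the three points together, here is a minimal corrected sketch of your program. Everything follows the fixes above; the only choice I made is std::setprecision(9), which reproduces the nine fractional digits shown in the JavaScript output:

#include <chrono>
#include <cstdint>
#include <iomanip>
#include <iostream>

uint64_t timeSinceEpochMillisec() {
  using namespace std::chrono;
  return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
}

// Return a double so the fractional part of the day is kept.
double JulianDateFromUnixTime(uint64_t t){
    //Not valid for dates before Oct 15, 1582
    return (t / 86400000.0) + 2440587.5;  // 86400000.0 forces floating point division
}

int main() {
  std::cout << std::fixed << std::setprecision(9)
            << JulianDateFromUnixTime(timeSinceEpochMillisec()) << std::endl;
  return 0;
}

Since the current Julian Date is around 2459349, the integer part already takes 7 digits, so nine fractional digits give about 16 significant digits, which is roughly the limit of what a double carries anyway.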

Edit: I forgot that the most common double format has 53 bits of precision, so about 15-16 significant decimal digits. You won't easily and portably get more (some implementations give long double 18 decimal digits of precision or more, others make it the same representation as double). AFAIK, JavaScript uses that same double format for all its numbers, so you are probably not losing anything in the conversion.
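
If you want to see what your particular toolchain offers, a small sketch using only <limits> prints the guaranteed and round-trip decimal digit counts for double and long double:

#include <iostream>
#include <limits>

int main() {
  // digits10: decimal digits guaranteed to survive a round trip through text;
  // max_digits10: digits needed to print the value without losing information.
  std::cout << "double:      " << std::numeric_limits<double>::digits10 << " / "
            << std::numeric_limits<double>::max_digits10 << '\n';
  std::cout << "long double: " << std::numeric_limits<long double>::digits10 << " / "
            << std::numeric_limits<long double>::max_digits10 << '\n';
  return 0;
}

On x86-64 MinGW-w64 GCC, long double is usually the 80-bit x87 extended format, so you would typically see 15 / 17 for double and 18 / 21 for long double.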
