I'm quite new to network programming.
I need to send a latitude/longitude pair composed of doubles over a network. I am coding both ends, so I can use whatever method I like. At present, I'm converting both directly to bytes and sending the whole 16 bytes.
What I was wondering was whether it might be better to divide each by the maximum possible value (90 and 180 respectively), multiply by Int32.MaxValue, and then convert to an int before sending. Obviously this would halve the data sent, but I can't work out how much precision I would lose.
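For reference, here is a rough sketch of the two encodings I'm comparing (the class and method names are just illustrative, and I'm assuming both ends agree on endianness):

    using System;

    static class CoordinateCodec
    {
        // Current approach: send the raw 16 bytes of the two doubles.
        public static byte[] EncodeAsDoubles(double lat, double lon)
        {
            var buffer = new byte[16];
            BitConverter.GetBytes(lat).CopyTo(buffer, 0);
            BitConverter.GetBytes(lon).CopyTo(buffer, 8);
            return buffer;
        }

        // Proposed approach: scale each value into the Int32 range and send 8 bytes.
        public static byte[] EncodeAsScaledInts(double lat, double lon)
        {
            int latScaled = (int)Math.Round(lat / 90.0 * int.MaxValue);
            int lonScaled = (int)Math.Round(lon / 180.0 * int.MaxValue);

            var buffer = new byte[8];
            BitConverter.GetBytes(latScaled).CopyTo(buffer, 0);
            BitConverter.GetBytes(lonScaled).CopyTo(buffer, 4);
            return buffer;
        }

        // Decoding the scaled form simply reverses the transform.
        public static (double Lat, double Lon) DecodeScaledInts(byte[] buffer)
        {
            double lat = BitConverter.ToInt32(buffer, 0) / (double)int.MaxValue * 90.0;
            double lon = BitConverter.ToInt32(buffer, 4) / (double)int.MaxValue * 180.0;
            return (lat, lon);
        }
    }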
Does anyone know how to calculate this?
CodePudding user response:
You drop from 53 bits (the significand of a double/binary64, per IEEE 754) to 32 bits (a 32-bit integer): a loss of 21 bits, or roughly 6 decimal digits (2^21 ≈ 2,000,000), out of the 15-16 decimal digits a double can represent.
What precision does your application require? Are your latitude and longitude coordinates in degrees? At the equator, 1 degree (of latitude or longitude) is roughly 110 km, so 3 significant digits gets you a resolution of ~100 km, 5 digits ~1 km, 8 digits ~1 m, and 10 digits (the best you can hope for with your transform) ~1 cm.
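If you want to check the Int32 transform's step size exactly rather than by counting digits, a quick calculation along these lines works (this assumes the divide-by-90/180 scaling you describe, and ~111 km per degree at the equator; the names are just illustrative):

    using System;

    class ResolutionCheck
    {
        static void Main()
        {
            const double metresPerDegree = 111_000; // rough equatorial figure (assumption)

            // Smallest representable step after scaling by 90 / 180 into the Int32 range.
            double latStepDegrees = 90.0 / int.MaxValue;
            double lonStepDegrees = 180.0 / int.MaxValue;

            Console.WriteLine($"Latitude step:  {latStepDegrees * metresPerDegree * 100:F2} cm");
            Console.WriteLine($"Longitude step: {lonStepDegrees * metresPerDegree * 100:F2} cm");
            // Prints roughly 0.47 cm and 0.93 cm respectively.
        }
    }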
This is just rule-of-thumbing by orders of magnitude to give you an idea. If your coordinates must resolve to better than ~10 m, you should proceed with caution.