If I do nothing else but this:
```javascript
let projection = d3.geoAzimuthalEqualArea()
console.log(projection([-3.0026, 16.7666])) // passing long/lat
```
I get `[473.67353385539417, 213.6120079887163]`, which are pixel coordinates.
But pixel coordinates relative to WHAT? I haven't even defined the size of the SVG container, the center of the map, etc. How can it already know the exact pixel position on the screen if I haven't even specified how big the map will be on the page? On a bigger map the point would be further to the right and down, which means a different pixel value.
Also why are those returned values floats with many decimal places if screen pixels can only ever be whole numbers?
And are there maximum values for these pixel coordinates? (I.e. is this projected on some kind of a default sized map?)
Where is this in the API documentation?
CodePudding user response:
The documentation is not very clear about the default dimensions of a projection's output. Somewhat hidden, you will find the value in the section on projection.translate():
> \# projection.translate([translate])
>
> […]
>
> The default translation offset places ⟨0°,0°⟩ at the center of a 960×500 area.
Those 960×500 are the dimensions of the 2D plane onto which the geographic data is projected.
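You can see where those numbers come from with a minimal sketch of what the projection does internally. This is not d3's actual source, just the Lambert azimuthal equal-area formula combined with d3-geo's documented defaults of scale 124.75 and translate [480, 250], i.e. the center of that 960×500 area:

```javascript
// Sketch of how d3 turns [lon, lat] into planar coordinates:
// the raw Lambert azimuthal equal-area projection centered on ⟨0°,0°⟩,
// followed by d3-geo's default scale (124.75) and translate ([480, 250]).
function azimuthalEqualArea([lon, lat], scale = 124.75, translate = [480, 250]) {
  const toRad = Math.PI / 180;
  const lambda = lon * toRad, phi = lat * toRad;
  const k = Math.sqrt(2 / (1 + Math.cos(phi) * Math.cos(lambda)));
  const x = k * Math.cos(phi) * Math.sin(lambda);
  const y = k * Math.sin(phi);
  // y is subtracted because screen y grows downward
  return [translate[0] + scale * x, translate[1] - scale * y];
}

// ⟨0°,0°⟩ lands exactly at the center of the default 960×500 area:
console.log(azimuthalEqualArea([0, 0])); // → [480, 250]

// And the question's input reproduces the question's output:
console.log(azimuthalEqualArea([-3.0026, 16.7666]));
```

So the values you got are simply coordinates within that default 960×500 reference area, nothing more.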
The output of a projection is the result of a sequence of mathematical operations on the input values, i.e. the geographic data. Floating-point math is used to get as close to the exact values as possible, and no assumption is made about the actual use of the result.

Although those coordinates may be interpreted as pixel coordinates by simply rounding the values, their use is not restricted to that interpretation. Especially with vector formats like SVG, it is quite normal to have floating-point coordinate values, with the user agent, e.g. the browser, doing the calculations to further project those vectors onto the screen, taking into account possible translations, rotations, skews, view boxes, etc.
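To illustrate, here is a sketch of re-centering a projected point for your own container and feeding the unrounded floats straight into SVG markup. The 600×400 size is hypothetical, and the shift mirrors what calling `projection.translate([width / 2, height / 2])` would do in d3 (to also fill the area you would adjust the scale, or use d3's `projection.fitSize`):

```javascript
// Hypothetical container size instead of the default 960×500:
const width = 600, height = 400;

// A point projected into the default 960×500 area (from the question):
const point = [473.67353385539417, 213.6120079887163];

// Move it from the default center [480, 250] to this container's center,
// equivalent to setting projection.translate([width / 2, height / 2]):
const shifted = [point[0] - 480 + width / 2, point[1] - 250 + height / 2];

// Fractional values are valid SVG coordinates; no rounding required:
const circle = `<circle cx="${shifted[0]}" cy="${shifted[1]}" r="2"></circle>`;
console.log(circle);
```

The browser rasterizes the vector with sub-pixel precision, so keeping the full floating-point values actually gives crisper, more accurate rendering than rounding them yourself.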