How does the choice of units impact numerical precision?


I write science-related codes with Python and I was wondering how the choice of units may affect my results. For example, if I enter a distance as 1e-9 meters or 1 nm or 10 Angstroms, I would obtain the exact same result on paper. However, I know that the representation of these quantities is different on a computer. Therefore, I would like to know how important it is to choose the relevant set of units in scientific computing to maximize numerical precision.

Thank you.

CodePudding user response:

How does the choice of units impact numerical precision?

As a first step, I suggest you get clarity about what 'numerical precision' actually means; a side effect of that will be accepting the statement "It doesn't affect it at all", provided by Tim Roberts in the comments, as a short, clear and simple answer.

Usually you choose the numerical precision yourself in your code, through the data types you use to store values and the way you perform calculations on those values.
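For example, here is a minimal sketch (assuming NumPy is available; the decimal module is part of the standard library) of how the data type, not the unit, determines how many significant digits you keep:

import numpy as np
from decimal import Decimal, getcontext

x = 0.1

# float32 carries roughly 7 significant decimal digits,
# float64 roughly 15-16
print(f"{float(np.float32(x)):.20f}")   # 0.10000000149011611938
print(f"{float(np.float64(x)):.20f}")   # 0.10000000000000000555

# with decimal you pick the number of significant digits yourself
getcontext().prec = 50
print(Decimal(1) / Decimal(3))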

The choice of units is just a choice of units; the choice of data types for the numerical representation of values in those units is another story.
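To make that concrete, here is a small sketch (math.ulp requires Python 3.9+) showing that the same length expressed in metres, nanometres or Angstroms is stored as different floats with very different absolute spacing, yet essentially the same relative precision:

import math

# the same physical length in three unit systems
d_m  = 1e-9   # metres
d_nm = 1.0    # nanometres
d_A  = 10.0   # Angstroms

# the absolute spacing between adjacent representable floats
# differs by many orders of magnitude...
print(math.ulp(d_m))    # ~2.1e-25
print(math.ulp(d_nm))   # ~2.2e-16
print(math.ulp(d_A))    # ~1.8e-15

# ...but the relative precision is ~2e-16 in every unit system
for d in (d_m, d_nm, d_A):
    print(math.ulp(d) / d)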

In other words, you first have to know what you actually want to do and how to achieve the results you expect.

Let's, for example, consider the following code:

x     = 1.0
dx    = 0.000_000_000_000_000_1
steps = 1_000_000

# each increment is far below the spacing of floats near 1.0,
# so every single addition is rounded away
for i in range(steps):
    x += dx
print(x)

x = 1.0
x += sum([dx] * steps)   # add the small values together first
print(x)

x = 1.0
x += dx * steps          # multiply instead of accumulating
print(x)

which prints:

1.0
1.0000000001
1.0000000001

This shows that when you get surprising results, the main issue is usually the way the calculation is performed, not the numerical precision or the choice of units as such.
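If you do want a running accumulation like the first loop to keep those tiny increments, one standard remedy is compensated summation; math.fsum from the standard library tracks the low-order bits that plain addition loses. A minimal sketch with the same numbers:

import math

dx    = 0.000_000_000_000_000_1
steps = 1_000_000

# fsum tracks the partial sums exactly, so the million tiny
# increments are not swallowed by the much larger starting value
print(math.fsum([1.0] + [dx] * steps))   # 1.0000000001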
