Understanding fixed-point notation in GCC/Clang

I am trying to wrap my head around how fixed-point notation works. I have read the documentation at https://www.open-std.org/jtc1/sc22/wg14/www/docs/n996.pdf but still cannot quite grasp it.

Here is an example from Clang's unit tests:

https://github.com/llvm/llvm-project/blob/main/clang/test/Frontend/fixed_point_div_const.c

https://github.com/llvm/llvm-project/blob/main/clang/test/Frontend/fixed_point_div.c

A small snippet looks like this:

// Division between different fixed point types
short _Accum sa_const = 1.0hk / 2.0hk;

How should I think of these division operators? Are they where the decimal point is? If so, is there a way I can visually verify the binary representation (i.e. by printing out the values) somehow?

CodePudding user response:

How should I think of these division operators?

You should think of these divisions as unit tests to check that division works.

There are probably more unit tests somewhere (e.g. doing things like short _Accum sa_const2 = 0.5hr * 2.0hk; to test that multiplication works, short _Accum sa_const2 = 0.5hr + 2.0hk; to check that addition works, and so on).

Are they where the decimal point is?

No. The position of the point is determined by the variable's type. E.g. a short _Accum is described as (at least - see note below) "s4.7", which means it has 1 sign bit, 4 integer bits and 7 fractional bits; and short _Accum myValue = 3.5hk; (note the hk suffix - hr is for short _Fract, which cannot hold 3.5) would be equivalent to short myValue = 3.5 * (1 << 7); i.e. a raw value of 448.
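
To make that scaling concrete, here is a minimal sketch in plain C (no fixed-point extension needed) that mimics the illustrative s4.7 layout with an ordinary short:

#include <stdio.h>

int main(void) {
    /* s4.7: 1 sign bit, 4 integer bits, 7 fraction bits.
       The raw representation of a value v is v * 2^7 = v * 128. */
    short raw = (short)(3.5 * (1 << 7));  /* 3.5 * 128 = 448 */
    printf("raw   = %d\n", raw);          /* prints 448 */
    printf("value = %f\n", raw / 128.0);  /* prints 3.500000 */
    return 0;
}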

Note: Just like how C originally got the normal integer types wrong/implementation-dependent (e.g. nobody can be sure whether int is 16 bits or larger, so portable software written/tested on larger computers breaks due to bugs on smaller computers) - a problem they later "corrected" with the introduction of better types (e.g. int_least16_t) - they've repeated the same mistake by making the fixed-point types equally implementation-dependent.

Of course this (the position of the point being determined by the variable's type) is also what makes the proposed/draft fixed-point support "likely unusable for its intended purpose". Specifically, people who care about fixed point are also likely to care about getting the best range/precision compromise, which means they need the result of each operator to (potentially) be a different fixed-point type - e.g. "s4.7 + s4.7 = s5.6" (because addition/subtraction needs 1 more integer bit to avoid overflow), "s4.7 * s4.7 = s8.3", and "(s4.7 + s4.7) * s4.7 = s5.6 * s4.7 = s9.2" (or maybe "s5.6 * s4.7 = s9.10" if you want to use more bits to preserve precision).
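
As a sketch of what that bookkeeping looks like when done by hand (the widths and shift counts below follow the illustrative "s4.7 * s4.7 = s8.3" rule above; GCC/Clang do none of this for you):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int16_t x = (int16_t)(2.5 * (1 << 7));   /* 2.5 as s4.7 -> raw 320 */
    int16_t y = (int16_t)(3.0 * (1 << 7));   /* 3.0 as s4.7 -> raw 384 */
    int32_t wide = (int32_t)x * y;           /* exact product, scaled by 2^14 */
    int16_t s8_3 = (int16_t)(wide >> 11);    /* narrow to s8.3 (scale 2^3) */
    printf("2.5 * 3.0 = %f\n", s8_3 / 8.0);  /* prints 7.500000 */
    return 0;
}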

is there a way I can visually verify the binary representation (i.e. by printing out the values) somehow?

A plain cast won't show you the bits - casting a fixed-point value to an integer type converts the value (so (int)(1.0hk / 2.0hk) is 0), it doesn't reinterpret the representation. Instead, copy the object's bytes into an unsigned integer of the same size (e.g. with memcpy or a union) and print that in binary or hex.
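
A minimal sketch, assuming a compiler with fixed-point support (e.g. clang -ffixed-point) and assuming short _Accum is 16 bits wide on your target (both are implementation-defined, so check before trusting the output):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    short _Accum a = 1.0hk / 2.0hk;    /* 0.5 */
    uint16_t raw;                      /* assumes sizeof(short _Accum) == 2 */
    memcpy(&raw, &a, sizeof raw);      /* copy the raw bit pattern */
    for (int i = 15; i >= 0; i--)      /* print the bits, MSB first */
        putchar((raw >> i) & 1 ? '1' : '0');
    putchar('\n');                     /* with 7 fraction bits: 0000000001000000 */
    return 0;
}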
