How does -march native affect floating point accuracy?


The code I work on contains a substantial amount of floating-point arithmetic. We have test cases that record the output for given inputs and verify that we don't change the results too much. It was suggested that I enable -march=native to improve performance. However, with that flag enabled we get test failures because the results change. Do the instructions made available by -march=native on more modern hardware reduce the amount of floating-point error? Increase it? Or a bit of both? Fused multiply-add should reduce floating-point error, but is that typical of instructions added over time? Or have some instructions been added that, while more efficient, are less accurate?

The platform I am targeting is x86_64 Linux. The processor information according to /proc/cpuinfo is:

processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model       : 85
model name  : Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz
stepping    : 4
microcode   : 0x2006a0a
cpu MHz     : 2799.999
cache size  : 30976 KB
physical id : 0
siblings    : 44
core id     : 0
cpu cores   : 22
apicid      : 0
initial apicid  : 0
fpu     : yes
fpu_exception   : yes
cpuid level : 22
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke md_clear flush_l1d
bugs        : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit
bogomips    : 4200.00
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

CodePudding user response:

-march=native means -march=$MY_HARDWARE, so its effect depends on the machine you compile on. For the CPU shown above, it is equivalent to -march=skylake-avx512 (Skylake-SP). The results can be reproduced by specifying that architecture explicitly instead of native.

It's quite possible that the errors will decrease with more modern instructions, specifically fused multiply-add (FMA). This instruction computes a*b + c with a single rounding at the end, instead of rounding the product and then the sum. That saves one rounding error.

CodePudding user response:

The use of FMA can both decrease and increase the error, and either change may make a test case fail, depending on how the test works. FMA improves the error "locally", but the effect may be the opposite when put in a wider context.

For example, a * c - b * d (the determinant of a 2x2 matrix) famously gives some (usually minor) trouble when FMA-contracted. Without FMA, the subtraction has the potential to cancel the rounding errors if they are the same on both sides. That does not always happen, but it does happen when a * c = b * d, which is of special interest because it means the determinant should be exactly zero. Without FMA the computed result actually is zero; with FMA it is not.

#include <math.h>
#include <stdio.h>

/* a * c - b * d, written so the compiler may contract it
   into an FMA followed by a subtraction */
double determinant(double a, double b, double c, double d)
{
    return a * c - b * d;
}

int main(void)
{
    /* volatile keeps the compiler from folding the whole
       computation away at compile time */
    volatile double a = M_PI;
    double x = determinant(a, a, a, a);
    printf("%E\n", x);   /* exactly zero without FMA, ~1E-16 with it */
    return 0;
}

This program, compiled by GCC 11.2 with optimizations enabled and FMA allowed, does not print zero, but something on the order of 1E-16.

Some variants of an "is this result close enough" unit test would conclude that this result is, relative to zero, extremely wrong. Another way to look at it, though, is that if one of the inputs had changed by just 1 ULP, that would have introduced an error on the order of 1E-15, which is even worse.

Most special/new instructions either don't affect accuracy or are used by default only in value-safe ways. For example, addsubpd and haddpd (from SSE3) are just equivalents of what previously cost more code, and roundpd (from SSE4.1) is by default only used in ways that don't change results (using roundpd for floor and ceil is safe; ironically, using it for round itself is non-trivial due to the different halfway rounding).
