CMake and GoogleTest weird behaviour with comparison when changing build type from Debug to Release


Context

I am writing a function that calculates an exponential backoff delay for a timer application. It simply returns 2^x, capped at a threshold maxVal: if 2^x exceeds the threshold, the threshold itself should be returned. Edge cases (non-positive arguments, overflow) should also fall back to maxVal.

util.cpp:

#include "util.h"
#include <iostream>

int calculateExponentialBackoffDelay(int x, int maxVal)
{
    int y;
    
    if(x <= 0 || maxVal <= 0) return maxVal;
    else if (x > maxVal) return maxVal;

    y = std::pow(2, x);
    std::cout << "y = " << y << std::endl;
    
    if(y > maxVal) return maxVal;
    else if(y < 0) return maxVal;
    else return y;
}

Now I make a CMake configuration with a GoogleTest dependency fetch.

CMakeLists.txt:

cmake_minimum_required(VERSION 3.14)
project(my_project)

# GoogleTest requires at least C++14
set(CMAKE_CXX_STANDARD 14)
set(CMAKE_BUILD_TYPE Release)

include(FetchContent)
FetchContent_Declare(
  googletest
  GIT_REPOSITORY https://github.com/google/googletest.git
  GIT_TAG release-1.12.1
)
# For Windows: Prevent overriding the parent project's compiler/linker settings
set(gtest_force_shared_crt ON CACHE BOOL "" FORCE)
FetchContent_MakeAvailable(googletest)

enable_testing()
include_directories(
    ${CMAKE_CURRENT_LIST_DIR}/
    )
# Add the main source compilation units
add_executable(
  test_calc
  test_calc.cpp
  util.cpp
)
target_link_libraries(
  test_calc
  GTest::gtest_main
)

include(GoogleTest)
gtest_discover_tests(test_calc)
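
For reference, I configure, build, and run the tests with the usual CMake workflow (exact paths may differ on your machine):

cmake -S . -B build
cmake --build build
cd build && ctest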

I run a test on the function which checks some boundary conditions. One of them checks that if 2^x > maxVal, then maxVal is returned, because the result of 2^x is above the maximum value (the threshold).

test_calc.cpp:

#include "util.h"
#include <climits>
#include <gtest/gtest.h>

TEST(util_tests, ExponentialBackoff)
{
    int x, maxVal, res;

    // Test x = maxVal and maxVal = 1000
    // Expected output: 1000
    maxVal = 1000;
    x = maxVal;
    EXPECT_EQ(maxVal, calculateExponentialBackoffDelay(x, maxVal));
}

When I set x and maxVal to 1000, 2^1000 is calculated, and because it's such a big number there is an overflow/wraparound to a very negative value (-2147483648). I expected that: the wraparound makes y negative inside the function, so the y < 0 branch should catch it and return maxVal, which is exactly what the test asserts.
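
The wraparound itself can be reproduced with a minimal standalone snippet (a hypothetical repro, separate from the test; on my x86-64 machine an unoptimized build prints -2147483648):

#include <cmath>
#include <iostream>

int main()
{
    // Out-of-range double -> int conversion: the same wraparound
    // that the y < 0 branch in the function is meant to catch.
    int y = std::pow(2, 1000);
    std::cout << "y = " << y << std::endl;
}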

Problem

This is where things get strange. With the build type set to Debug, I run ctest inside my build directory and all test cases pass. Then I change one line in CMakeLists.txt from ... :

set(CMAKE_BUILD_TYPE Debug)

... to ...

set(CMAKE_BUILD_TYPE Release)

... and that test case fails:

1: Expected equality of these values:
1:   maxVal
1:     Which is: 1000
1:   calculateExponentialBackoffDelay(x, maxVal)
1:     Which is: -2147483648

So for some reason the y < 0 case inside the function body is not being reached, and instead the wraparound result is returned.

Why is this? What am I doing wrong? I tried running strip -s test_calc on Linux (while keeping the Debug configuration in CMake) to check whether it was a symbols thing, only to find that the test cases still pass. What else does CMake do that changes the comparison behaviour of the resulting binary?

CodePudding user response:

Floating-integral conversions

  • A prvalue of a floating-point type can be converted to a prvalue of any integer type. The fractional part is truncated, that is, the fractional part is discarded. If the value cannot fit into the destination type, the behavior is undefined.

This is exactly what happens here.

The behavior is undefined. Any expectation about the result of int y = std::pow(2, x) with x > 30 is invalid (2^31 already does not fit in a 32-bit int), and such a call may lead to any result from calculateExponentialBackoffDelay.

In this particular case, the compiler knows that y = std::pow(2, x) is always greater than 0 for valid values of x, so it drops the if (y < 0) branch entirely.
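
One way to avoid the UB (a sketch of one possible fix, not the only one) is to do the range check while the value is still a double, so that the double-to-int conversion only ever happens for in-range values:

#include <cmath>

int calculateExponentialBackoffDelay(int x, int maxVal)
{
    if (x <= 0 || maxVal <= 0) return maxVal;
    if (x > maxVal) return maxVal;

    // Compare in the double domain: std::pow returns double, and the
    // comparison is well-defined even when y is huge (or infinity).
    double y = std::pow(2.0, x);
    if (y > static_cast<double>(maxVal)) return maxVal;

    // Here 0 < y <= maxVal <= INT_MAX, so the conversion is in range
    // and well-defined.
    return static_cast<int>(y);
}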

CodePudding user response:

When changing the build type changes the result, your first thought should always be "undefined behaviour somewhere in the code". And that is indeed the case here: converting a double (the result of pow) to an integer type when the value is out of range for that type is undefined behaviour.

Quote from cppreference:

A finite value of any real floating type can be implicitly converted to any integer type. Except where covered by boolean conversion above, the rules are:

  • The fractional part is discarded (truncated towards zero).
    • If the resulting value can be represented by the target type, that value is used
    • otherwise, the behavior is undefined

2 to the power 1000 is roughly 1e301, which is far beyond the range of any int type, so the conversion in y = std::pow(2, x); is definitely UB. And UB on modern compilers is very hard to reason about.


Here's one attempt at reasoning it out anyway:

  1. Compilers can optimize code based on assumption that there is no UB in the code.
  2. Unless there is UB, the only valid outcome of y = std::pow(2, x); is a positive integer (x is known to be positive here, since x <= 0 returns early, and 2 raised to a positive power is positive).
  3. There can be no UB in the program (point 1), so the condition if (y < 0) is always false (point 2) and can be optimized away.

But this is just my guess at what happens; it may be correct or completely wrong. The compiler is allowed to do absolutely anything with code that contains any UB at all.
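
As for the strip -s experiment: strip only removes the symbol table, it does not change the generated code, so it cannot affect behaviour. The Debug/Release difference comes purely from the optimization level: with GCC and Clang, CMake's default Release flags are -O3 -DNDEBUG, while Debug uses -g with no optimization. You can observe the same difference without CMake, and catch the bug directly with UBSan (a sketch; repro.cpp is a hypothetical file containing the function plus a main() that prints calculateExponentialBackoffDelay(1000, 1000)):

# Unoptimized: the y < 0 branch survives, likely printing 1000
g++ -O0 repro.cpp -o repro && ./repro

# Optimized: the branch may be removed, possibly printing -2147483648
g++ -O3 repro.cpp -o repro && ./repro

# UBSan should report the out-of-range double -> int conversion at runtime
g++ -fsanitize=undefined repro.cpp -o repro && ./repro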
