Visual Studio's gtest unit calculates incorrect code coverage

Time:03-06

I am trying to use gtest in Visual Studio Enterprise 2022 and to generate a code coverage report.

// pch.h
#pragma once
#include <gtest/gtest.h>

// pch.cpp
#include "pch.h"

// test.cpp
#include "pch.h"
int add(int a, int b) {
    return a + b;
}
TEST(a, add) {
    EXPECT_EQ(2, add(1, 1));
}

This image is my test coverage report.

This is very minimal code, so I would expect its test coverage to be 100%. In reality it is only 26.53%. I suspect this is because a lot of the code in the header "gtest/gtest.h" is never executed. How can I write a hello-world project with 100% coverage?

CodePudding user response:

You should write tests in dedicated files and exclude the test sources from analysis by the coverage tool. By their nature, unit tests themselves are not subjects for coverage measurement.

// First file, a subject for a tool
int add(int a, int b) {
    return a + b;
}
// Second file, excluded from analysis by a tool
TEST(a, add) {
    EXPECT_EQ(2, add(1, 1));
}
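In Visual Studio's built-in Code Coverage, one way to express such an exclusion is a .runsettings file that filters sources by path. The patterns below are illustrative placeholders; adjust them to your project layout:

```xml
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage">
        <Configuration>
          <CodeCoverage>
            <Sources>
              <Exclude>
                <!-- Placeholder patterns: exclude the test source
                     and the gtest headers from coverage analysis -->
                <Source>.*\\test\.cpp</Source>
                <Source>.*\\gtest\\.*</Source>
              </Exclude>
            </Sources>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
```

Select the file via Test > Configure Run Settings before running "Analyze Code Coverage", so only the production sources contribute to the percentage.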

TEST produces a lot of code that includes several conditional branches, exception handlers and other machinery.

EXPECT_EQ produces at least two branches, roughly if (2 == add(1, 1)) ... else ....

Of course add(1, 1) yields a single result, so a single run cannot cover all the branches inside the unit test.

CodePudding user response:

The problem is that you do not understand what you are seeing.

If you keep the production code (int add(int a, int b)) in a separate source file, the coverage result will be easier to interpret. Note that you are interested only in coverage of the production code.

There is also a view that shows which lines and branches are covered by marking them directly in the source file. This view is easier to interpret.

Some code is not covered because GoogleTest contains extra code for different scenarios: filtering tests, repeating them, printing reports in case of errors and so on. Since you put everything in a single file, the statistic includes that code too. Combine this with the fact that your tested code is extremely simple (not many lines), and the statistic is dominated by coverage of the test code itself.

So basically this is a kind of false positive.
