I am trying to parse the FAILURES section from the terminal output of a Pytest session, line by line, identifying the testname and test filename for each test, which I then want to join together to form a "fully qualified test name" (FQTN), e.g. tests/test_1.py::test_3_fails. I also want to capture and save the traceback info (which is everything between the testname and the test filename).
The parsing part is straightforward, and I already have working regexes that match the test name and the test filename, and I can extract the traceback info based on those. My issue with the FQTNs is algorithmic - I can't seem to figure out the overall logic to identify a testname, then the test's filename, which occurs on a later line. I need to handle not only the tests in the middle of the FAILURES section, but also the first test and the last test of the section.
Here's an example. This is the output section for all failures during a test run, along with some of the terminal output that comes right before FAILURES, and right after.
.
.
.
============================================== ERRORS ===============================================
__________________________________ ERROR at setup of test_c_error ___________________________________
@pytest.fixture
def error_fixture():
> assert 0
E assert 0
tests/test_2.py:19: AssertionError
============================================= FAILURES ==============================================
___________________________________________ test_3_fails ____________________________________________
log_testname = None
def test_3_fails(log_testname):
> assert 0
E assert 0
tests/test_1.py:98: AssertionError
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
---------------------------------------- Captured log setup -----------------------------------------
INFO root:test_1.py:68 Running test tests.test_1...
INFO root:test_1.py:69 Setting test up...
INFO root:test_1.py:70 Executing test...
INFO root:test_1.py:72 Tearing test down...
______________________________________ test_8_causes_a_warning ______________________________________
log_testname = None
def test_8_causes_a_warning(log_testname):
> assert api_v1() == 1
E TypeError: api_v1() missing 1 required positional argument: 'log_testname'
tests/test_1.py:127: TypeError
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
---------------------------------------- Captured log setup -----------------------------------------
INFO root:test_1.py:68 Running test tests.test_1...
INFO root:test_1.py:69 Setting test up...
INFO root:test_1.py:70 Executing test...
INFO root:test_1.py:72 Tearing test down...
___________________________ test_16_fail_compare_dicts_for_pytest_icdiff ____________________________
def test_16_fail_compare_dicts_for_pytest_icdiff():
listofStrings = ["Hello", "hi", "there", "at", "this"]
listofInts = [7, 10, 45, 23, 77]
assert len(listofStrings) == len(listofInts)
> assert listofStrings == listofInts
E AssertionError: assert ['Hello', 'hi... 'at', 'this'] == [7, 10, 45, 23, 77]
E At index 0 diff: 'Hello' != 7
E Full diff:
E - [7, 10, 45, 23, 77]
E ['Hello', 'hi', 'there', 'at', 'this']
tests/test_1.py:210: AssertionError
____________________________________________ test_b_fail ____________________________________________
def test_b_fail():
> assert 0
E assert 0
tests/test_2.py:27: AssertionError
============================================== PASSES ===============================================
___________________________________________ test_4_passes ___________________________________________
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
.
.
.
Is anyone here good with algorithms? Maybe some pseudocode that shows an overall way of getting each testname and its associated test filename?
CodePudding user response:
Here is my proposal to get the rendered summary for a test case report. Use this stub as a rough idea - you might want to iterate through the reports and dump the rendered summaries first, then do the curses magic to display the collected data.
Some tests to play with:
import pytest

def test_1():
    assert False

def test_2():
    raise RuntimeError('call error')

@pytest.fixture
def f():
    raise RuntimeError('setup error')

def test_3(f):
    assert True

@pytest.fixture
def g():
    yield
    raise RuntimeError('teardown error')

def test_4(g):
    assert True
Dummy plugin example that renders the summary for the test_3 case. Put the snippet in conftest.py:
def pytest_unconfigure(config):
    # example: get rendered output for test case `test_spam.py::test_3`
    # get the reporter
    reporter = config.pluginmanager.getplugin('terminalreporter')
    # create a buffer to dump reporter output to
    import io
    buf = io.StringIO()
    # fake a tty or pytest will not colorize the output
    buf.isatty = lambda: True
    # replace the writer in the reporter to dump the output into the buffer instead of stdout
    from _pytest.config import create_terminal_writer
    # I want to use the reporter again later to dump the rendered output,
    # so I store the original writer here (you probably don't need it)
    original_writer = reporter._tw
    writer = create_terminal_writer(config, file=buf)
    # replace the writer
    reporter._tw = writer
    # find the report for `test_spam.py::test_3` (we already know it will be an error report)
    errors = reporter.stats['error']
    test_3_report = next(
        report for report in errors if report.nodeid == 'test_spam.py::test_3'
    )
    # dump the summary along with the stack trace for the report of `test_spam.py::test_3`
    reporter._outrep_summary(test_3_report)
    # print dumped contents
    # you probably don't need this - this is just for demo purposes
    # restore the original writer to write to stdout again
    reporter._tw = original_writer
    reporter.section('My own section', sep='>')
    reporter.write(buf.getvalue())
    reporter.write_sep('<')
A pytest run now yields an additional section:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> My own section >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@pytest.fixture
def f():
> raise RuntimeError('setup error')
E RuntimeError: setup error
test_spam.py:14: RuntimeError
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
with the stack trace rendered the same way pytest does in the ERRORS summary section. You can play with outcomes for different test cases if you want - change the reporter.stats key if necessary ('error' or 'failed', or even 'passed' - although the summary should be empty for passed tests) and amend the test case nodeid.
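As a side note: if all you need is the FQTN plus the traceback for each failing test, you may not have to parse rendered text at all - the report objects in reporter.stats already carry both (report.nodeid is the FQTN, report.longreprtext is the plain-text traceback). A minimal sketch of that idea, with the hook wiring shown for illustration:

```python
def collect_failures(stats):
    """Map each failing/erroring test's FQTN (report.nodeid) to its
    rendered traceback text (report.longreprtext)."""
    found = {}
    for outcome in ('failed', 'error'):
        for report in stats.get(outcome, []):
            found[report.nodeid] = report.longreprtext
    return found

# In conftest.py you would call it from a hook, e.g.:
def pytest_unconfigure(config):
    reporter = config.pluginmanager.getplugin('terminalreporter')
    for fqtn, traceback_text in collect_failures(reporter.stats).items():
        print(fqtn)
        print(traceback_text)
```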