I'm attempting to write a unit test, using pytest and Mock, that verifies the correct arguments are being passed to json.dump. The function that houses the json.dump call accepts one argument, a list of IDs, which is used to query the database. For each ID, two calls are made to retrieve tables that contain that same ID, and the results are formatted into a dictionary. After the loop finishes, I pass the final dictionary to json.dump like so:
with open(
    "./src/bloomberg/ecdpmm/bmpmsvc/read/serialize_change_set/serialize_change_set.json",
    "w",
    encoding="utf-8",
) as serialized_file:
    json.dump(response_json, serialized_file, default=object_handler, sort_keys=True, indent=4)
What confuses me is how, in another file, I can write a test case that asserts the correct arguments are being passed. I've seen code snippets that do something along the lines of:
json = Mock()
json.loads('{"key": "value"}')
# <Mock name='mock.loads()' id='4550144184'>
json.loads.assert_called_with('{"key": "value"}')
When doing the assertion, are the arguments (that I am mocking) being compared to the arguments being passed in my main file? Is the snippet above the correct approach to what I am trying to do?
Any guidance/tips would be greatly appreciated!
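To make the snippet above concrete, here is a runnable version; the mock is named fake_json here purely for illustration, so it doesn't shadow the real json module:

```python
from unittest import mock

# A Mock records every call made on it (and on its auto-created
# attributes); assert_called_with checks the most recent recorded call.
fake_json = mock.Mock()
fake_json.loads('{"key": "value"}')

# Passes: the last recorded call used exactly these arguments.
fake_json.loads.assert_called_with('{"key": "value"}')

# Fails: the last recorded call did not use these arguments.
try:
    fake_json.loads.assert_called_with('{"other": 1}')
except AssertionError:
    print("assertion failed as expected")
```

So the assertion compares against calls recorded on the mock itself, not against the real json module.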
Here is the function I am trying to test:
def serialize_change_set(change_set_ids: list[int] | None = None) -> None:
    """
    Get a list of altered tables in the change set and serialize them.
    """
    logger.info("serialize all the tables in the change set")
    response_json = {}
    for cs_id in change_set_ids:
        response_json[cs_id] = {"Changed_Tables": {}}
        description = db.get_change_set_description(cs_id)
        response_json[cs_id]["Description"] = description
        altered_tables = db.get_change_set(cs_id)
        for table_name, table_rows in altered_tables.items():
            formatted_rows = [dict(row) for row in table_rows]
            response_json[cs_id]["Changed_Tables"][table_name] = formatted_rows
    with open(
        "serialize_change_set.json",
        "w",
        encoding="utf-8",
    ) as serialized_file:
        json.dump(response_json, serialized_file, default=object_handler, sort_keys=True, indent=4)
CodePudding user response:
If you have a module (say, my_module.py) that does this:
import json


def my_function(response_json):
    with open(
        "serialize_change_set.json",
        "w",
        encoding="utf-8",
    ) as serialized_file:
        json.dump(
            response_json,
            serialized_file,
            sort_keys=True,
            indent=4,
        )
Then you can write a test like this that mocks out json.dump and later checks how it was called. In test_my_module.py we have:
import my_module
from unittest import mock


def test_my_function():
    test_data = {"some": "value"}
    with mock.patch("my_module.json"), mock.patch(
        "my_module.open", mock.mock_open()
    ) as _open:
        my_module.my_function(test_data)
        my_module.json.dump.assert_called_with(
            test_data, _open(), sort_keys=True, indent=4
        )
We have to mock out both json.dump and the builtin open function, because (a) we don't want our unit tests to create files, and (b) we want to be able to match the value returned by open in our call to assert_called_with.
Our test passing:
$ pytest -v
========================================== test session starts ==========================================
platform linux -- Python 3.10.4, pytest-6.2.4, py-1.11.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/lars/tmp/python
plugins: anyio-3.5.0
collected 1 item
test_my_module.py::test_my_function PASSED [100%]
=========================================== 1 passed in 0.02s ===========================================
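Adapting this pattern to your serialize_change_set works the same way. The sketch below inlines a trimmed copy of the function so the example is self-contained; in real code you would patch "your_module.db", "your_module.json", and "your_module.open" instead of patching names in the test module itself:

```python
import json
from unittest import mock

db = None  # hypothetical stand-in for the real database layer


def serialize_change_set(change_set_ids=None):
    # Trimmed copy of the question's function, for a self-contained demo.
    response_json = {}
    for cs_id in change_set_ids:
        response_json[cs_id] = {"Changed_Tables": {}}
        response_json[cs_id]["Description"] = db.get_change_set_description(cs_id)
        for table_name, rows in db.get_change_set(cs_id).items():
            response_json[cs_id]["Changed_Tables"][table_name] = [dict(r) for r in rows]
    with open("serialize_change_set.json", "w", encoding="utf-8") as f:
        json.dump(response_json, f, sort_keys=True, indent=4)


def test_serialize_change_set():
    # Patch the db layer, the json module, and open where this module
    # looks them up, then assert on the arguments json.dump received.
    with mock.patch(f"{__name__}.db") as fake_db, mock.patch(
        f"{__name__}.json"
    ) as fake_json, mock.patch(
        f"{__name__}.open", mock.mock_open(), create=True
    ) as fake_open:
        fake_db.get_change_set_description.return_value = "a description"
        fake_db.get_change_set.return_value = {"t1": [{"id": 1}]}

        serialize_change_set([42])

        expected = {
            42: {"Changed_Tables": {"t1": [{"id": 1}]}, "Description": "a description"}
        }
        fake_json.dump.assert_called_with(
            expected, fake_open(), sort_keys=True, indent=4
        )
```

If you don't care about the exact file handle, mock.ANY can be passed in place of fake_open() in the assertion.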
CodePudding user response:
ISTM that the best possible test for the arguments to json.dump() is to run them through json.dumps(*args, **kwargs) and see whether that raises an exception. Since the two functions share the same internal logic, dumps() can serve as a validator for dump() without having to write to disk. Put that in a validation function and use Mock's side_effect to attach the call.
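A minimal sketch of that idea, with illustrative names, using side_effect so every call recorded by the mock is also validated through json.dumps:

```python
import json
from unittest import mock


def validate_dump_args(obj, fp, **kwargs):
    # json.dumps shares json.dump's serialization logic, so running the
    # same arguments through it raises the same errors, without a file.
    json.dumps(obj, **kwargs)


fake_dump = mock.Mock(side_effect=validate_dump_args)

# Serializable arguments pass through silently and are recorded as usual.
fake_dump({"key": "value"}, mock.Mock(), sort_keys=True, indent=4)
fake_dump.assert_called_with({"key": "value"}, mock.ANY, sort_keys=True, indent=4)

# An unserializable value raises TypeError, just as json.dump would.
try:
    fake_dump({"bad": object()}, mock.Mock(), sort_keys=True, indent=4)
except TypeError:
    print("caught unserializable value")
```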