Load test cases from file in PyTest
Recently, I've started using a lot of table-driven tests to improve test code readability and to cover conditions more exhaustively.
Pytest provides a very easy way to do this. Copying the example from the official docs:
Tests v1
# content of test_expectation.py
import pytest
@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    ("6*9", 42),
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
When run with -v, it clearly tells you which case fails.
➜ $?=1 @arastogi-mn4.linkedin.biz pytest/parametrize [ 9:34AM] ➤ py.test -v test_expectation_v1.py
================================================ test session starts =================================================
platform darwin -- Python 2.7.13, pytest-3.8.0, py-1.6.0, pluggy-0.7.1 -- /export/apps/python/2.7/bin/python
cachedir: .pytest_cache
rootdir: /Users/arastogi/Playground/pytest/parametrize, inifile:
collected 3 items
test_expectation_v1.py::test_eval[3+5-8] PASSED [ 33%]
test_expectation_v1.py::test_eval[2+4-6] PASSED [ 66%]
test_expectation_v1.py::test_eval[6*9-42] FAILED [100%]
Now, this works well when the test cases are small enough to embed in the test code itself. But what if the inputs and outputs were large JSON documents? Clearly, we need a way to store these test cases in a file and load them in the test.
One way to do this without using any pytest features is to create a file named tc.json holding the cases, and modify our test to read and parse that file.
Create a file tc.json
➜ $?=0 @arastogi-mn4.linkedin.biz pytest/parametrize [ 9:37AM] ➤ cat tc.json | jq -c .
[{"test_input":"3+5","expected":8},{"test_input":"2+4","expected":6},{"test_input":"6*9","expected":42}]
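If you don't have the file yet, a short script can generate it. A minimal sketch (the case dicts simply mirror the three cases from the parametrized test above):

```python
import json

# The same three cases used in test_expectation_v1.py, as a list of dicts
cases = [
    {"test_input": "3+5", "expected": 8},
    {"test_input": "2+4", "expected": 6},
    {"test_input": "6*9", "expected": 42},
]

# Serialize the cases to tc.json next to the tests
with open("tc.json", "w") as f:
    json.dump(cases, f)
```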
Tests v2
# content of test_expectation_v2.py
import json
def test_eval():
    with open('tc.json') as f:
        tts = json.loads(f.read())
    for tt in tts:
        assert eval(tt['test_input']) == tt['expected']
The problem with this approach is that pytest doesn't tell us which test case failed.
➜ $?=1 @arastogi-mn4.linkedin.biz pytest/parametrize [ 9:35AM] ➤ py.test -v test_expectation_v2.py
================================================ test session starts =================================================
platform darwin -- Python 2.7.13, pytest-3.8.0, py-1.6.0, pluggy-0.7.1 -- /export/apps/python/2.7/bin/python
cachedir: .pytest_cache
rootdir: /Users/arastogi/Playground/pytest/parametrize, inifile:
collected 1 item
test_expectation_v2.py::test_eval FAILED [100%]
A better way is to create a decorator that loads the test case data and parametrizes the test, just like pytest.mark.parametrize does.
Tests v3
import json
def load_tc_from_file(tc_params, tc_file):
    params = [x.strip() for x in tc_params.split(',')]

    def wrapper(function):
        with open(tc_file) as f:
            tc_data = json.loads(f.read())
        tc_cases = [tuple(case[p] for p in params) for case in tc_data]
        function.tc_cases = tc_cases
        function.tc_params = tc_params
        return function

    return wrapper

def pytest_generate_tests(metafunc):
    if getattr(metafunc.function, 'tc_cases', None):
        metafunc.parametrize(metafunc.function.tc_params, metafunc.function.tc_cases)

@load_tc_from_file('test_input, expected', './tc.json')
def test_eval(test_input, expected):
    assert eval(test_input) == expected
This clearly tells us that the 3rd test case failed.
➜ $?=1 @arastogi-mn4.linkedin.biz pytest/parametrize [10:49AM] ➤ py.test -vvv test_expectation_v3.py
================================================ test session starts =================================================
platform darwin -- Python 2.7.13, pytest-3.8.0, py-1.6.0, pluggy-0.7.1 -- /export/apps/python/2.7/bin/python
cachedir: .pytest_cache
rootdir: /Users/arastogi/Playground/pytest/parametrize, inifile:
collected 3 items
test_expectation_v3.py::test_eval[8-3+5] PASSED [ 33%]
test_expectation_v3.py::test_eval[6-2+4] PASSED [ 66%]
test_expectation_v3.py::test_eval[42-6*9] FAILED [100%]
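If the auto-generated IDs like test_eval[42-6*9] aren't readable enough, metafunc.parametrize also accepts an ids argument with one label per case. A sketch extending the decorator above; tc_ids is a name I'm introducing here for illustration, not anything pytest provides:

```python
import json

def load_tc_from_file(tc_params, tc_file):
    params = [x.strip() for x in tc_params.split(',')]

    def wrapper(function):
        with open(tc_file) as f:
            tc_data = json.load(f)
        function.tc_cases = [tuple(case[p] for p in params) for case in tc_data]
        function.tc_params = tc_params
        # Hypothetical extension: label each case by its first parameter,
        # so the test ID reads e.g. test_eval[6*9]
        function.tc_ids = [str(case[params[0]]) for case in tc_data]
        return function

    return wrapper

def pytest_generate_tests(metafunc):
    if getattr(metafunc.function, 'tc_cases', None):
        metafunc.parametrize(
            metafunc.function.tc_params,
            metafunc.function.tc_cases,
            ids=metafunc.function.tc_ids,
        )
```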