Questions tagged [pytest-benchmark]

pytest-benchmark is a pytest plugin used to collect timing benchmarks for individual functions. Use this tag if your question relates to the "benchmark" fixture in tests written for pytest.

pytest-benchmark is a pytest plugin that collects execution-time statistics, such as the mean and standard deviation, for individual functions. The functionality is provided by the benchmark fixture, which is called from a test with the function to be measured and reports the results as text tables or histograms.

Reference: https://pytest-benchmark.readthedocs.io/
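A minimal sketch of the fixture in use (fib is just an illustrative workload; running pytest on the file prints the timing table):

    # test_fib.py
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    def test_fib_speed(benchmark):
        # benchmark() calls fib(15) repeatedly and records timing statistics
        result = benchmark(fib, 15)
        assert result == 610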

9 questions
3 votes · 0 answers

pytest-benchmark: Run setup on each benchmark iteration

I'm trying to benchmark the bundling process of our JS bundles using pytest-benchmark. For accurate measurement the target directory needs to be empty. I've tried cleaning it on each run using the pedantic setup argument, but this only runs on…
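With benchmark.pedantic, the setup callable runs once per round, not once per iteration, so forcing iterations=1 makes it run before every timed call. A hedged sketch (directory name and bundling step are illustrative stand-ins):

    import pathlib
    import shutil

    TARGET = pathlib.Path("build/bundles")   # assumed output directory

    def clean_target_dir():
        # wipe and recreate the output directory before every timed call
        shutil.rmtree(TARGET, ignore_errors=True)
        TARGET.mkdir(parents=True, exist_ok=True)

    def build_bundle():
        # stand-in for the real bundling step
        (TARGET / "bundle.js").write_text("// bundled output")

    def test_bundle_speed(benchmark):
        benchmark.pedantic(
            build_bundle,
            setup=clean_target_dir,
            rounds=20,
            iterations=1,   # one iteration per round, so setup runs before each call
        )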
2 votes · 1 answer

using pytest.fixture on setup_method

Is it possible to use pytest.fixture on setup_method so that some operation always finishes between test cases? I have tried to use a fixture like the following, and the structure looks OK. I am able to execute each testcase before funcA is…
jacobcan118 • 7,797 • 12 • 50 • 95
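A common pattern for "run something around every test" is an autouse fixture, which pytest applies to every test in its scope without it being requested explicitly. A hedged sketch (the shared state is purely illustrative):

    import pytest

    state = []

    class TestSomething:
        @pytest.fixture(autouse=True)
        def run_around_each_test(self):
            # setup: runs before every test method in this class
            state.append("ready")
            yield
            # teardown: runs after every test method, even a failing one
            state.clear()

        def test_one(self):
            assert state == ["ready"]

        def test_two(self):
            assert state == ["ready"]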
1 vote · 1 answer

Pytest-benchmark: precomputing test data

Could you please tell me whether precomputing test input data in the same script is a valid approach, and whether it would affect the benchmarking procedure (i.e. the timing)? E.g., in script.py def compute_inputs(): #some computations #return…
aaxx • 55 • 1 • 5
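Only the callable passed to benchmark() is timed, so precomputation done beforehand, for example in a fixture, does not show up in the measurements. A minimal sketch under that assumption:

    import pytest

    @pytest.fixture(scope="module")
    def inputs():
        # precomputed once per module; this work is not part of the measured time,
        # because it happens before benchmark() is ever called
        return [i * i for i in range(10_000)]

    def test_sum_speed(benchmark, inputs):
        # only the sum() call is timed
        result = benchmark(sum, inputs)
        assert result == sum(inputs)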
1 vote · 0 answers

How to save the pytest-benchmark output to an Excel sheet

I want to store my pytest-benchmark results in an Excel file; does anyone know how to do it? python import pytest import pytest-benchmark def fun(): print("Defines a function") test_funct(): result = benchmark.pedantic(fun, args=(,),…
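pytest-benchmark has no Excel exporter of its own, but it can write JSON via --benchmark-json, which is easy to convert with pandas. A hedged sketch (the field names follow the JSON layout I'd expect; verify against your own output, and note that to_excel needs openpyxl installed):

    # 1) run the benchmarks and dump raw results:
    #    pytest --benchmark-json=results.json
    # 2) convert the JSON to an Excel sheet:
    import json
    import pandas as pd

    with open("results.json") as fh:
        data = json.load(fh)

    rows = [
        {"name": b["name"], **b["stats"]}   # stats holds mean, stddev, min, max, ...
        for b in data["benchmarks"]
    ]
    pd.DataFrame(rows).to_excel("results.xlsx", index=False)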
1 vote · 1 answer

PYTEST-HTML report on test fail

My question: how to generate the HTML report for a test script when a test case fails, using the pytest-html plugin. From my reading on the topic I found this link: In this link, at the bottom, they have shown the image PYTEST SNAPSHOT ON TEST FAIL. So the…
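As far as I know, pytest-html writes its report regardless of whether tests pass or fail; failing tests appear in the report with their captured output. A minimal sketch, assuming the pytest-html plugin is installed:

    # either on the command line:
    #   pytest --html=report.html --self-contained-html
    # or permanently in pytest.ini:
    #   [pytest]
    #   addopts = --html=report.html --self-contained-html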
0 votes · 0 answers

pytest-benchmark directory is not getting selected

Env details: platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0, benchmark 4.0.0. When I keep all test files ending with .py and run the tests, no tests are selected: pytest service/microBenchmark/ All the files are located in microBenchmark…
niranjan pb • 1,125 • 11 • 14
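By default pytest only collects files named test_*.py or *_test.py, so benchmark files that merely end in .py are skipped. A hedged sketch of widening the collection pattern in pytest.ini (the pattern here is illustrative; renaming the files to test_*.py works just as well):

    [pytest]
    python_files = test_*.py *_benchmark.py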
0 votes · 0 answers

Setting pytest-benchmark configurations in pytest.ini

I am trying to set up the pytest.ini configuration file so that any benchmark test runs with the "--benchmark-autosave" option turned on and other tests do not. I have tried adding the option to the ini file as follows... [pytest] addopts =…
Interlooper • 494 • 2 • 5 • 14
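The option can simply live in addopts; it applies to every run, but as far as I can tell it only produces saved data when tests that actually use the benchmark fixture ran. A minimal sketch:

    [pytest]
    addopts = --benchmark-autosave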
0 votes · 2 answers

pytest & pytest-benchmark: serial or parallel?

In a single file, test.py, I have 3 test functions: test1(), test2(), test3(). Do pytest and pytest-benchmark run these 3 test cases in parallel or serially? I have 3 files: test1.py, test2.py, test3.py. Respectively, I have a single test…
AgnosticCucumber • 616 • 1 • 7 • 21
0 votes · 1 answer

How do I test for speed with PyTest / tox?

For testing machine learning algorithms / repositories, I see three things that matter: (1) does it crash, (2) does it have a minimum test accuracy, (3) is it fast enough. While (1) and maybe (2) are standard unit testing, I'm not too sure how to deal with (3)…
Martin Thoma • 124,992 • 159 • 614 • 958
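One simple approach is a plain timing assertion, a hedged sketch rather than the only option (pytest-benchmark's --benchmark-compare-fail is the more robust route, since it compares against saved runs instead of a hard-coded bound):

    import time

    def train_tiny_model():
        # illustrative placeholder for the workload being measured
        time.sleep(0.01)

    def test_training_is_fast_enough():
        start = time.perf_counter()
        train_tiny_model()
        elapsed = time.perf_counter() - start
        # crude upper bound; flaky on loaded CI machines, so keep the margin generous
        assert elapsed < 1.0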