
I need to parallelize my Python unit tests, which I wrote using the standard unittest module. I'm trying to decide between two approaches:

  1. Keep using unittest, but add a custom 'multiprocess' runner that can spawn a test via Platform LSF (a remote execution tool); an example can be found in the TestOOB project.
  2. Switch to py.test and customize the pytest-xdist plugin to run over LSF instead of SSH.

I lean towards #1 since I already have a working setup, in particular the test suite generation (which uses a generator that parses an Excel spreadsheet and is not trivial).

Any recommendation on which approach to follow?

Note: my company uses LSF, and I must use it for resource sharing with other teams.
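The core idea behind option 1 — splitting a unittest suite into individual cases and farming them out to workers — can be sketched with the standard library alone. This is only a simplified local stand-in: local processes take the place of LSF jobs, the LSF dispatch itself (e.g. bsub) is not shown, and all names here are illustrative rather than taken from the original setup.

```python
# Sketch: distribute unittest cases across local worker processes.
# Local multiprocessing stands in for LSF dispatch; an LSF-backed
# runner would submit each test id as a job instead.
import multiprocessing
import unittest


class MathTests(unittest.TestCase):
    """Placeholder tests; a real suite would come from the generator."""

    def test_add(self):
        self.assertEqual(1 + 1, 2)

    def test_sub(self):
        self.assertEqual(3 - 1, 2)


def collect_ids(suite):
    """Flatten a (possibly nested) suite into individual test ids."""
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from collect_ids(item)
        else:
            yield item.id()


def run_one(test_id):
    """Run a single test, identified by dotted name, in a worker."""
    case = unittest.defaultTestLoader.loadTestsFromName(test_id)
    result = unittest.TestResult()
    case.run(result)
    return (test_id, result.wasSuccessful())


if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(MathTests)
    ids = list(collect_ids(suite))
    with multiprocessing.Pool(2) as pool:
        for test_id, ok in pool.map(run_one, ids):
            print(test_id, "ok" if ok else "FAILED")
```

The key design point is that only the test *id* crosses the process boundary; each worker reloads and runs the case itself, which is also what makes the scheme portable to remote machines.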

sherve

1 Answer


I'd first try to simply install pytest-xdist from PyPI and run your existing tests with `py.test -n 5`, i.e. in 5 parallel processes. If that basically works, it means that modifying pytest-xdist to use an LSF transport is a worthwhile option. Effectively you will need to look into extending execnet (http://codespeak.net/execnet), which is the underlying library for distributing execution. HTH.
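execnet's real gateway/channel API is documented at the linked site; as a rough stdlib-only illustration of the pattern it provides (start a Python worker, push it code, read a result back over a pipe), something like the following can help clarify what an LSF transport would have to replace. Everything here is a hypothetical sketch: the worker merely pretends to run a test, and an LSF transport would swap the local subprocess launch for a bsub-style submission.

```python
# Rough stdlib illustration of the execnet pattern: launch a Python
# worker, exchange one JSON message over its stdin/stdout. execnet's
# real gateway/channel API is richer; this only shows the shape.
import json
import subprocess
import sys

WORKER_CODE = r"""
import sys, json
task = json.loads(sys.stdin.read())              # receive a task
result = {"test": task["test"], "passed": True}  # pretend to run it
sys.stdout.write(json.dumps(result))             # report back
"""


def run_remote(task):
    """Launch a worker interpreter and exchange one JSON message."""
    proc = subprocess.run(
        [sys.executable, "-c", WORKER_CODE],
        input=json.dumps(task),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)


if __name__ == "__main__":
    print(run_remote({"test": "test_add"}))
```

In execnet terms, the subprocess launch corresponds to opening a gateway and the stdin/stdout exchange to a channel; an LSF transport changes only how the remote interpreter gets started, not the messaging pattern.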

hpk42
  • Thanks for the suggestion. Since it's been a while, I have already implemented it. I followed a similar approach to the one you describe, distributing locally before plugging LSF into the mix. I ended up going with option 1: I switched my test framework from the regular unittest module to the TestOOB framework, then wrote a customized 'runner' to run tests over LSF. I pass the test data across machines using Pyro objects. – sherve Nov 09 '11 at 16:41