
I have a module to test; the module includes a series of functions and simple classes. I'm wondering whether there is any existing attempt (i.e. a package) to automatically:

1) Generate Python test code from an initial Python file containing the function definitions.

2) Have this generated code call those functions with random/parametric data as parameters.

It seems technically feasible using inspect and Python metaclasses, at least when limited to functions over numerical types (NumPy arrays).

String inputs (e.g. URLs) would be impossible to generate blindly (they could only be parametrized).

EDIT: By random, I mean "parametric random".

Suppose we have

    def f(x1, x2, x3): ...

    For each argument xi of f:
        if type(xi) is a 1-D array:
            run these tests: empty array, zeros array, random negative array,
            random positive array, high values, low values, integer array,
            real-number array, ordered array, equally spaced array, ...

        if type(xi) is int:
            test: zero, 1, 2, 3, 4, random values, negative values

Do people think such a project is possible using inspect and metaclasses (limited to NumPy/numerical arguments)?

Suppose you have a very large library; this kind of generation could run in the background.
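A rough sketch of what I have in mind, using inspect and type annotations to pick test values per parameter (the catalogue of cases and the function f below are just illustrative assumptions, not an existing package):

    # Assumed sketch, not an existing package: enumerate a function's
    # parameters with inspect and dispatch each annotated type to a
    # catalogue of "interesting" values, then record results/exceptions.
    import inspect
    import itertools
    import numpy as np

    CASES = {
        int: [0, 1, 2, 3, 4, -7, int(np.random.randint(-1000, 1000))],
        np.ndarray: [
            np.array([]),           # empty array
            np.zeros(10),           # zeros array
            -np.random.rand(10),    # random negative array
            np.random.rand(10),     # random positive array
            np.full(10, 1e12),      # high values
            np.full(10, 1e-12),     # low values
            np.arange(10.0),        # ordered, equally spaced array
        ],
    }

    def generate_calls(func):
        """Call func on every combination of catalogued values, yielding
        (args, result-or-exception) so the outcomes can be inspected."""
        sig = inspect.signature(func)
        pools = [CASES.get(p.annotation, [None]) for p in sig.parameters.values()]
        for args in itertools.product(*pools):
            try:
                yield args, func(*args)
            except Exception as exc:
                yield args, exc

    # Toy annotated function to exercise the generator:
    def f(x1: np.ndarray, x2: int) -> float:
        return float(x1.sum()) + x2

    for args, outcome in generate_calls(f):
        print(args, "->", outcome)

The recorded outcomes could then be reviewed, or compared between versions of the library, to see what each function does on edge cases.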

  • Is that really the direction you want to go with testing - building the tests from the functions? In test-driven development, you create the tests, and design the functions to work correctly in those tests. Random data doesn't do a good job of rooting out problems. Problems usually occur in the edge cases - empty arrays, 0 values, nan values, unexpected values. – hpaulj Feb 18 '16 at 23:48
  • Hi, by random I mean "parametric random". Take the example func(x1, x2, x3): if type(xi) is a 1-D array, run these tests: empty array, zeros array, random negative array, random positive array, high values, low values, integer array, real-number array, ... –  Feb 19 '16 at 04:05

2 Answers


You might be thinking of fuzz testing, where a bunch of garbage data is submitted to a function to see if anything makes it behave badly. It sounds like the Hypothesis library will let you generate different test cases based on some parameters.
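For example, a minimal property-based test with Hypothesis and its NumPy extra might look like this (scale_to_unit and the properties checked are made up for illustration, not from the question):

    # Minimal sketch of a Hypothesis property-based test; scale_to_unit and
    # the assertions are illustrative assumptions.
    import numpy as np
    from hypothesis import given, strategies as st
    from hypothesis.extra.numpy import arrays

    def scale_to_unit(x):
        """Toy function under test: rescale an array into [0, 1]."""
        span = x.max() - x.min()
        return (x - x.min()) / span if span else np.zeros_like(x)

    @given(arrays(dtype=np.float64,
                  shape=st.integers(min_value=1, max_value=50),
                  elements=st.floats(-1e6, 1e6)))
    def test_scale_to_unit_stays_in_bounds(x):
        y = scale_to_unit(x)
        assert len(y) == len(x)
        assert np.all((y >= 0.0) & (y <= 1.0))

Hypothesis generates many such arrays within the stated bounds and shrinks any failing input down to a minimal counterexample.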

Don Kirkby
  • Hypothesis looks closer to the question, but a few things are missing: 1) this is not garbage data, it is parametrized data that follows some boundaries; 2) the output (crash, error, result, ...) can be analyzed to guess what the function is doing; 3) "behave badly" is only a side effect/consequence. –  Feb 19 '16 at 08:41
  • One point about Hypothesis: you need to find and supply an invariant of the function, and everything is checked against this invariant. –  Feb 19 '16 at 13:27

After spending time searching, it seems this kind of project does not really exist (to my knowledge).

Technically, it would be a mix of existing packages, each with its own issues:

Hypothesis: data generation for the inputs and running the code to catch crashes/errors (without the invariant part of Hypothesis).

Jedi: static analysis of the code / type inference. Type inference is a difficult problem in Python in general.

If the type is a number or an array of numbers: boundaries exist and typical usage is clearly defined.

If the type is a string: inference is pretty difficult without human guidance.

The same holds for other types; guessing the context is important.
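For the numerical case, the two pieces can already be combined by hand when type annotations are present: Hypothesis can build a strategy straight from a declared type. A minimal sketch (the annotated function g is hypothetical):

    # Sketch: with annotations available, st.from_type builds strategies from
    # the declared parameter types; the test only checks that nothing raises.
    from hypothesis import given, strategies as st

    def g(a: int, b: float) -> float:   # hypothetical annotated function
        return a * b

    @given(st.from_type(int), st.from_type(float))
    def test_g_does_not_crash(a, b):
        g(a, b)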
