I'm starting to use common test as my test framework in erlang.

Suppose that I have a function I expect to accept only positive numbers, and it should blow up in any other case.

positive_number(X) when X > 0 -> {positive, X}.

and I want to test that

positive_number(-5).

will not complete successfully.

How do I test this behaviour? In other languages I would say that the test case expects some error or exception, and fails if the function under test doesn't throw any error for the invalid parameter. How do I do that with Common Test?

Update:

I can make it work with

test_credit_invalid_input(_) ->
  InvalidArgument = -1,
  try mypackage:positive_number(InvalidArgument) of
    _ -> ct:fail(not_failing_as_expected)
  catch
    error:function_clause -> ok
  end.

but I think this is too verbose; I would like something like:

assert_error(mypackage:positive_number, [-1], error:function_clause)

I'm assuming that Common Test has this somewhere, and that it is my lack of knowledge of the framework that is making me resort to such a verbose solution.

Update: Inspired by Michael's response I created the following function:

assert_fail(Fun, Args, ExceptionType, ExceptionValue, Reason) ->
  try apply(Fun, Args) of
    _ -> ct:fail(Reason)
  catch
    ExceptionType:ExceptionValue -> ok
  end.

and my test became:

test_credit_invalid_input(_) ->
  InvalidArgument = -1,
  assert_fail(fun mypackage:positive_number/1,
              [InvalidArgument],
              error,
              function_clause,
              failed_to_catch_invalid_argument).

but I think it only works because it is a little bit more readable to have the assert_fail call than to have the try...catch in every test case.

I still think that a better implementation should exist in Common Test; IMO it is ugly repetition to have this test pattern implemented over and over in every project.
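One way to avoid hand-rolling this in every project is to include EUnit's assertion macros, which work fine inside a Common Test suite. A minimal sketch, assuming the `mypackage:positive_number/1` function from the question (the suite and module names here are illustrative):

```erlang
-module(mypackage_SUITE).
-include_lib("common_test/include/ct.hrl").
%% EUnit's assert macros can be mixed into a Common Test suite:
-include_lib("eunit/include/eunit.hrl").

-export([all/0, test_credit_invalid_input/1]).

all() -> [test_credit_invalid_input].

test_credit_invalid_input(_Config) ->
    %% ?assertError(Pattern, Expr) fails the test unless Expr
    %% raises error:Pattern.
    ?assertError(function_clause, mypackage:positive_number(-1)).
```

On failure, `?assertError` reports the expected pattern and the actual value or exception, so less information is lost than with a bare `ct:fail(Tag)`.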

Jonas Fagundes

1 Answer

Convert the exception to an expression and match it:

test_credit_invalid_input(_) ->
    InvalidArgument = -1,
    {'EXIT', {function_clause, _}}
            = (catch mypackage:positive_number(InvalidArgument)).

That turns your exception into a non-exception and vice versa, in about as terse a fashion as you can expect.

You could always use a macro or a function to hide the verbosity, though.
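For instance, a hypothetical macro wrapping the catch-and-match pattern above (the name `?MUST_FAIL` is my own invention, not part of Common Test):

```erlang
%% Hypothetical helper macro: matches when Expr exits with
%% {Reason, _}, and crashes the test case with a badmatch otherwise.
-define(MUST_FAIL(Reason, Expr),
        {'EXIT', {Reason, _}} = (catch Expr)).

test_credit_invalid_input(_) ->
    ?MUST_FAIL(function_clause, mypackage:positive_number(-1)).
```

Unlike a helper that replaces the result with a fixed atom, the badmatch produced by this macro still carries the actual value or exception in the failure report.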

Edit (re: comment):

If you use a full try catch construct, as in your question, you lose information about any failure case, because that information is thrown away in favour of the atom 'not_failing_as_expected' or 'failed_to_catch_invalid_argument'.

Trying an expected fail value on the shell:

1> {'EXIT', {function_clause, _}}
        = (catch mypackage:positive_number(-4)).             
{'EXIT',{function_clause,[{mypackage,positive_number,
                                 [-4],
                                 [{file,"mypackage.erl"},{line,7}]},
                      {erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,661}]},
                      {erl_eval,expr,5,[{file,"erl_eval.erl"},{line,434}]},
                      {erl_eval,expr,5,[{file,"erl_eval.erl"},{line,441}]},
                      {shell,exprs,7,[{file,"shell.erl"},{line,676}]},
                      {shell,eval_exprs,7,[{file,"shell.erl"},{line,631}]},
                      {shell,eval_loop,3,[{file,"shell.erl"},{line,616}]}]}}

Trying an expected success value on the shell:

2> {'EXIT', {function_clause, _}} = (catch mypackage:positive_number(3)). 
** exception error: no match of right hand side value {positive,3}

In both cases you get a lot of information, but crucially, from both of them you can tell what parameters were used to call your function under test (albeit in the second case this is only because your function is deterministic and one-to-one).

In a simple case like this these things don't matter so much, but as a matter of principle it's important, because later, with more complex cases where the value that fails your function is perhaps not hard coded as it is here, you may not know what value caused your function to fail, or what it returned. That might then be the difference between looking at an ugly backtrace for a moment or two and realising exactly what the problem is, or spending 15 minutes setting up a test to really find out what's going on... OR, worse still, if it's a heisenbug, you might spend hours looking for it!

Michael
  • This solution works but produces a more confusing error message on test failures than my previous solution. I liked the helper function idea. – Jonas Fagundes Nov 16 '15 at 02:58
  • @JonasFagundes You can include the EUnit macros and use those just fine under Common Test. – Adam Lindberg Nov 16 '15 at 10:14
  • @JonasFagundes Does the confusing error message really matter? I know Erlang stack traces aren't particularly pretty, but mainly because Erlang is functional you'll know which function failed, where and why from it... whereas if you use a try catch as in your question you actually throw away all that information and replace it with 'not_failing_as_expected'. I understand your urge to do this to some degree but I think it's really the wrong way to go. I'll add this to the answer to make it clear, can't do it in a comment really. – Michael Nov 16 '15 at 11:06
  • @JonasFagundes You can of course adapt your chosen helper function solution to incorporate this too, if you don't mind adding a lot of confusing verbosity to your error. – Michael Nov 16 '15 at 11:28
  • @Michael I understand your point, but in a test case having too much information is as bad as not having enough; finding the right amount is the tricky part :) The `ct:fail` can receive a format string and args, but I decided that having a tag plus the test case name in the error report was enough in this case. No information is lost in the system under test; only the test case throws away unnecessary information (it will generate an `error:function_clause`, which I don't think is good practice; I will branch the function to catch it and produce a proper error message). – Jonas Fagundes Nov 16 '15 at 13:27