I am currently writing a small argument checking library for Java. Checks are written in a fluent interface way like this:
Check.that(name).matches("hello .*!").hasLengthBetween(0, 20);
Check.that(list).isNullOr().hasSize(0);
Check.that(args).named("arguments").isNotEmpty();
So far, the semantics of these checks are that they also implicitly assert that the argument is not null. To allow null, one uses the isNullOr()
modifier method, as in the second example.
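For illustration, here is a minimal sketch of how these semantics could be implemented; all class, field, and method names here are hypothetical stand-ins, not the library's actual internals:

```java
// Hypothetical sketch: a fluent checker that rejects null by default,
// unless isNullOr() has been called earlier in the chain.
final class Check {

    static StringCheck that(String value) {
        return new StringCheck(value, false);
    }

    static final class StringCheck {
        private final String value;
        private final boolean nullAllowed;

        StringCheck(String value, boolean nullAllowed) {
            this.value = value;
            this.nullAllowed = nullAllowed;
        }

        // Modifier: subsequent checks pass when the value is null.
        StringCheck isNullOr() {
            return new StringCheck(value, true);
        }

        StringCheck matches(String regex) {
            if (value == null) {
                if (nullAllowed) {
                    return this;  // null was explicitly allowed
                }
                // the implicit null check
                throw new IllegalArgumentException("argument must not be null");
            }
            if (!value.matches(regex)) {
                throw new IllegalArgumentException("argument does not match " + regex);
            }
            return this;
        }
    }
}
```

With these semantics, `Check.that(null).matches(...)` throws, while `Check.that(null).isNullOr().matches(...)` passes silently.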
The next thing I want to add is support for check inversion like this:
Check.that(name).not().matches("hello .*!");
But now I feel that the default nullness handling becomes weird and unintuitive. The logically correct way of inverting the test would be to allow null: the negation of "is non-null and matches" is "is null or does not match". To disallow null, one would have to prepend an explicit isNotNull()
check:
Check.that(name).isNotNull().not().matches("hello .*!");
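To make the tension concrete, here is a hedged sketch (again with hypothetical names) of a not() that flips only the predicate while the implicit null check still fires. This is exactly the behavior that feels unintuitive, since a pure logical negation would let null pass:

```java
// Hypothetical sketch: not() inverts the predicate, but the implicit
// non-null assertion is applied regardless of inversion.
final class Check {

    static StringCheck that(String value) {
        return new StringCheck(value, false);
    }

    static final class StringCheck {
        private final String value;
        private final boolean inverted;

        StringCheck(String value, boolean inverted) {
            this.value = value;
            this.inverted = inverted;
        }

        // Modifier: the next check's predicate is negated.
        StringCheck not() {
            return new StringCheck(value, !inverted);
        }

        StringCheck matches(String regex) {
            if (value == null) {
                // Fires even under not() -- the questionable part:
                // logical negation of "non-null and matches" would allow null.
                throw new IllegalArgumentException("argument must not be null");
            }
            boolean matched = value.matches(regex);
            if (matched == inverted) {
                throw new IllegalArgumentException(
                    (inverted ? "argument must not match " : "argument must match ") + regex);
            }
            return this;
        }
    }
}
```

Under this sketch, `Check.that(null).not().matches(...)` still throws on the null, even though the inverted predicate "does not match" is arguably vacuously satisfied by null.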
Because of this, I am considering changing the semantics so that nullness always has to be checked explicitly. I know of one project that also does this: Bean Validation, where most constraints treat null as valid and nullness is constrained separately. But the downside is that this will probably make about 90% of the checks 12 characters (.isNotNull()) longer, as null is often an invalid argument anyway.
So, to make a long story short: what are the arguments for and against implicit null checking? Are there other libraries or standards that handle it one way or the other?