
The question "Which @NotNull Java annotation should I use?" is outdated and somewhat opinion-based. Since it was asked, Java 8 has arrived, along with newer IDEs.

While Java 8 allows type annotations through its integration of JSR 308, it does not come with any. From "JSR 308 Explained: Java Type Annotations" by Josh Juneau:

JSR 308, Annotations on Java Types, has been incorporated as part of Java SE 8.
...
Compiler checkers can be written to verify annotated code, enforcing rules by generating compiler warnings when code does not meet certain requirements. Java SE 8 does not provide a default type-checking framework, but it is possible to write custom annotations and processors for type checking. There are also a number of type-checking frameworks that can be downloaded, which can be used as plug-ins to the Java compiler to check and enforce types that have been annotated. Type-checking frameworks comprise type annotation definitions and one or more pluggable modules that are used with the compiler for annotation processing.
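
To make the JSR 308 part concrete, here is a minimal sketch of where type annotations can appear. The @NonNull/@Nullable annotations are declared locally purely for illustration, since Java itself ships none; a real project would import them from whichever framework it adopts.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;
import java.util.List;

// Placeholder annotations, declared here only to illustrate JSR 308 placement;
// a real project would import @NonNull/@Nullable from its chosen framework
// (e.g. org.eclipse.jdt.annotation or org.checkerframework.checker.nullness.qual).
@Target(ElementType.TYPE_USE)
@interface NonNull {}

@Target(ElementType.TYPE_USE)
@interface Nullable {}

class TypeAnnotationExamples {
    // Before Java 8, annotations could only go on declarations; JSR 308
    // additionally allows them on any use of a type:
    @Nullable String findPrefixed(List<@NonNull String> names,          // on a type argument
                                  @NonNull String @Nullable [] extra) { // element type vs. array type
        for (String name : names) {
            if (name.startsWith("x")) {
                return name;
            }
        }
        return null; // allowed: the declared return type is @Nullable
    }
}
```

Without a checker plugged into the compiler, such annotations compile but have no effect; they only become meaningful once one of the analysis tools below processes them.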

Considering only solutions that offer at least some kind of @CanBeNull and @CannotBeNull, I've found information on the following (could be wrong):

Some are used in static code analysis, some in runtime validation.

What are the practical differences between the above options? Is there (going to be) a standard or is it intended for everyone to write their own analysis framework?

user1803551

2 Answers


The information you collected pretty much describes it already:

Static analysis based on type annotations (JSR 308) is indeed much more powerful than previous approaches.

Two of the annotation sets use JSR 308, both for the sake of static analysis (which could also be considered advanced type checking). At their core, the two tools promoting these annotations are essentially compatible (and each can consume the other's annotations). The differences I know of fall mainly into two areas:

  • IDE integration.
  • Interpretation of unannotated types. In a strict world, every type is either nonnull or nullable, so if an annotation is missing it could be interpreted as nonnull by default. Alternatively, the underlying type system could use a notion of "legacy types", raising warnings when "unchecked conversions" are needed (similar to the combination of generic types and raw types). To the best of my knowledge the Checker Framework applies the strict approach, whereas Eclipse lets you choose between a @NonNullByDefault strategy and admitting "legacy types" (for the sake of migration); a sketch of the latter follows this list.
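
To illustrate the second point, here is a minimal sketch using the Eclipse JDT annotations (org.eclipse.jdt.annotation). It assumes the annotation jar is on the classpath and the compiler's annotation-based null analysis is enabled; the exact wording of the warnings depends on the compiler settings.

```java
import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;

@NonNullByDefault            // every unannotated type in this class is treated as non-null
class StrictDefaults {
    String greet(String name) {          // parameter and return type are implicitly non-null
        return "Hello, " + name;
    }

    @Nullable String tryGreet(@Nullable String name) {
        if (name == null) {
            return null;                 // fine: both types are explicitly nullable
        }
        return greet(name);              // fine: the null check above narrows the type
    }
}

class LegacyCaller {
    // Without @NonNullByDefault these types are "legacy" (unannotated).
    // Passing such a value into annotated code is flagged only as a
    // potential unchecked conversion, similar to raw types vs. generics.
    String callIntoAnnotatedCode(StrictDefaults s, String maybeNull) {
        return s.greet(maybeNull);
    }
}
```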

Also, to the best of my knowledge, nobody is planning to invest in standardizing these annotations at the moment.

Stephan Herrmann
  • Your comment about unannotated types is misleading. The Checker Framework accommodates unannotated code and offers flexible handling of legacy types. It permits best-case or worst-case assumptions about libraries, making the default be nullable or non-null, enabling or disabling warnings, annotating third-party libraries without modifying their source code, and much more -- and all with a well-defined semantics. – mernst Aug 27 '16 at 01:52
  • @mernst I certainly didn't want to mislead anybody. I just answered to the best of my knowledge. Apparently this knowledge was outdated. Sorry about that. – Stephan Herrmann Aug 27 '16 at 16:56

Some other nullness analyses exist besides the ones you mentioned; for example, IntelliJ contains a nullness analysis.
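
As a small illustration, IntelliJ's analysis understands its own annotations out of the box (this sketch assumes the small org.jetbrains annotations jar is on the classpath; the IDE can also be configured to recognize other frameworks' annotations):

```java
import org.jetbrains.annotations.NotNull;
import org.jetbrains.annotations.Nullable;

import java.util.List;

class IntelliJAnnotationsExample {
    // IntelliJ's inspection warns about callers that may pass null for
    // a @NotNull parameter and about unchecked uses of a @Nullable result.
    @Nullable
    static String firstOrNull(@NotNull List<String> items) {
        return items.isEmpty() ? null : items.get(0);
    }
}
```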

Here are some key questions to ask about a nullness analysis:

  • Does it work at compile time or run time? A compile-time analysis gives the programmer advance warning about potential bugs. With a run-time analysis, your program still crashes, but perhaps it crashes earlier or with a more informative error message. (A short sketch contrasting the two appears after this list.)

  • Is it a verifier or a bug finder? A verifier gives a correctness guarantee: if the tool doesn't report any potential errors, then the program will not suffer the given error at run time. A bug finder reports some problems, but if it doesn't report any problems, your program might still be wrong. A verifier usually requires more work from the programmer, including annotating the program. A bug finder can require less effort to start using, since it can run on an unannotated program (though it may not give very good results in that case).

  • How precise is the analysis? How often does it raise false alarms, issuing a warning even though the program is actually correct? How often does it suffer missed alarms, failing to notify you about a real bug in your program?

  • Is the tooling built into an IDE? If so, it may be easier to use. If not, it can be used by any programmer rather than just ones who use that particular IDE.
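
Here is a minimal sketch of the first two distinctions. The names are purely illustrative, and the commented-out @NonNull stands in for whichever framework's annotation you would actually use.

```java
import java.util.Objects;

class NullnessStyles {

    // Run-time approach: the program still fails on null, but it fails
    // immediately and with a clear message instead of an NPE somewhere deeper.
    static String shoutAtRuntime(String name) {
        Objects.requireNonNull(name, "name must not be null");
        return name.toUpperCase();
    }

    // Compile-time approach: a verifier (e.g. a nullness checker) rejects
    // callers that might pass null, so the error never reaches run time.
    // A bug finder may warn about some such call sites, but its silence
    // is not a guarantee of correctness.
    static String shoutAtCompileTime(/* @NonNull */ String name) {
        return name.toUpperCase();
    }
}
```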

The three tools you mentioned all work at compile time. FindBugs is a bug finder, and the others are verifiers. The Checker Framework has better precision, but the other two have better IDE integration. FindBugs doesn't work with Java 8 type annotations (JSR 308); both of the others support both Java 8 and pre-Java-8 annotations. All of these tools have their place in a programmer's toolbox; which one is right for you depends on your needs and goals.

To answer some of your other questions:

FindBugs's annotations use the javax domain because its designer hoped that Oracle would adopt FindBugs as a Java standard (!). That never happened. You are right that the use of javax confuses many people into thinking that it is official or favored by Oracle, which it is not.
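
For illustration, typical use of those javax annotations looks like the following; note that, despite the package name, they come from the separately distributed JSR 305 jar rather than from the JDK:

```java
// These annotations live in the separately distributed JSR 305 jar
// (often pulled in as com.google.code.findbugs:jsr305); despite the
// javax package name, they are not part of the JDK.
import javax.annotation.Nonnull;
import javax.annotation.Nullable;

class JavaxDomainExample {
    @Nullable
    String lookup(@Nonnull String key) {
        // FindBugs reads these declaration annotations when flagging
        // possible null dereferences.
        return key.isEmpty() ? null : key.trim();
    }
}
```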

Is there (going to be) a standard or is it intended for everyone to write their own analysis framework?

For now, Oracle wants the community to experiment with creating and using a variety of analysis frameworks. They feel that they don't yet understand the pros and cons of the various approaches well enough to create a standard. They don't want to prematurely create a standard that enshrines a flawed approach. They are open to creating a standard in the future.

mernst
  • Can you explain what you mean by "better precision" (better than what)? Are you referring to integration with additional annotations like `@Raw`? Otherwise I would think that difference in precision vs. Eclipse would mostly be a matter of bugs that one tool or the other could have (and can hopefully fix). – Stephan Herrmann Aug 26 '16 at 15:28
  • Re IDE integration: Eclipse's analysis is part of ecj, the compiler that is integrated in the Eclipse IDE. This does not prevent anybody from using ecj (and its analysis) outside the IDE. – Stephan Herrmann Aug 26 '16 at 15:34
  • Better precision: Eclipse's nullness analysis is missing many features that the Checker Framework supports, such as handling of map keys, partially-initialized objects, method pre- and post-conditions, and much more. These are essential for practical verification of real-world code without poor precision. These features aren't as important if you are interested just in bug-finding instead of verification. – mernst Aug 26 '16 at 20:39
  • Can we agree that the added precision of the Checker Framework essentially results from supporting more annotations in addition to `@NonNull` and `@Nullable`? I think this information should be added since the question only mentioned these two basic annotations. – Stephan Herrmann Aug 27 '16 at 16:53
  • This is a very informative and complete answer, explaining what parameters should be looked at to differentiate between such tools and applies them to the specified ones. Thank you. – user1803551 Aug 27 '16 at 23:18
  • @StephanHerrmann I filtered for solutions that use *at least* `@NonNull` and `@Nullable`, so things like Lombok are "disqualified". If anything supports more than those two, that's fine. However, I would have liked to see the customary disclaimer at the end of the answer, since the answerer works on/for the Checker Framework. – user1803551 Aug 27 '16 at 23:25