Using the Apache driver is fine if your program runs on a host where all the Hadoop libs are already installed. Otherwise you will have to drag along a smorgasbord of dependencies, e.g.
- hive-jdbc*-standalone.jar (the large one)
- hadoop-common*.jar
- hadoop-auth*.jar (for Kerberos only)
- commons-configuration*.jar
- the SLF4J family and friends
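Once those jars are on the classpath, opening a connection is plain JDBC. Here is a minimal sketch; the hostname, database and credentials are placeholders for your own cluster (10000 is the default HiveServer2 port, and the `Class.forName` call is technically redundant with JDBC 4+ driver auto-loading, but it fails fast if the driver jar is missing):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        // Apache Hive JDBC driver class -- requires the jars listed above
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Placeholder host/port/database -- adjust for your cluster
        String url = "jdbc:hive2://hive-server.example.com:10000/default";

        // Username/password depend on your auth setup; an unsecured
        // HiveServer2 typically accepts any user and an empty password
        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```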
Packaging all these dependencies in your own JAR will probably result in a massive, cluttered piece of software (God, how Maven is misused nowadays). Plus, you may run into compatibility issues, because newer clients are not always compatible with older servers; "not compatible" here meaning "unable to initialize the connection with the Thrift server".
For a standalone install, the Cloudera driver may be a good solution: registration just means leaving one of your "junk" e-mail addresses to receive a couple of marketing messages (and you can unsubscribe afterwards). Although I admit I've never used it against a non-Cloudera cluster.
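For what it's worth, the Cloudera driver speaks the same `jdbc:hive2://` URL scheme, so the code barely changes; only the driver class differs. The class name below (`com.cloudera.hive.jdbc41.HS2Driver`) is an assumption based on the JDBC 4.1 package naming, so check the documentation shipped with the driver version you download:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class ClouderaHiveJdbcTest {
    public static void main(String[] args) throws Exception {
        // Assumed class name for the JDBC 4.1 package; it varies
        // between driver releases -- verify against the bundled docs
        Class.forName("com.cloudera.hive.jdbc41.HS2Driver");

        // Same jdbc:hive2:// URL scheme as the Apache driver;
        // host/port/database are placeholders for your cluster
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hive-server.example.com:10000/default")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}
```

The practical difference is the classpath: the Cloudera driver ships as a self-contained package, so you avoid hunting down the Hadoop jars listed above.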