When I'm using Spark, I sometimes have to process a Hive table backed by one huge file, and other times a table backed by many smaller files.
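For context, here is roughly how I read these tables today (the table name is just a placeholder, and I'm using PySpark, though the same question applies in Scala):

```python
from pyspark.sql import SparkSession

# Session with Hive support so spark.table() can resolve tables from the metastore
spark = (
    SparkSession.builder
    .appName("hive-table-processing")
    .enableHiveSupport()
    .getOrCreate()
)

# "my_db.my_table" is a placeholder for whichever Hive table I'm processing
df = spark.table("my_db.my_table")

# This is how I check how many partitions Spark decided to use for the scan
print(df.rdd.getNumPartitions())
```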
I understand that when tuning Spark jobs, the right approach depends on whether or not the underlying files are splittable. This page from Cloudera says we should be aware of that:
...For example, if your data arrives in a few large unsplittable files...
How do I know if my file is splittable?
If the file is splittable, how do I know how many partitions to use?
Is it better to err on the side of more partitions if I'm trying to write code that will work on any Hive table, i.e. either of the two cases described above? (See the sketch below for what I mean.)
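To make that last question concrete: by "err on the side of more partitions" I mean something like unconditionally repartitioning right after the read, regardless of how the table is laid out (the table name and the count of 200 are made up):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Made-up partition count; the question is whether a blanket repartition like
# this is a sensible default for both the single-huge-unsplittable-file case
# and the many-small-files case
df = spark.table("my_db.my_table").repartition(200)
```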