
When I'm using Spark, I sometimes run into a single huge file in a Hive table, and other times I'm trying to process many smaller files in a Hive table.

I understand that when tuning Spark jobs, the behaviour depends on whether or not the files are splittable. This page from Cloudera says we should be aware of whether or not the files are splittable:

...For example, if your data arrives in a few large unsplittable files...

  1. How do I know if my file is splittable?

  2. How do I know the number of partitions to use if the file is splittable?

  3. Is it better to err on the side of more partitions if I'm trying to write a piece of code that will work on any Hive table, i.e. either of the two cases described above?


1 Answer


Since Spark accepts Hadoop input formats, have a look at the image below.

Only bzip2-formatted files are splittable; other formats like zlib, gzip, LZO, LZ4 and Snappy are not splittable.
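If you want to check this programmatically rather than by eye, one option is Hadoop's CompressionCodecFactory, which resolves a codec from the file extension; among the built-in codecs only BZip2Codec implements SplittableCompressionCodec. This is just a rough sketch (the class name SplittableCheck is made up, and indexed LZO, covered in EDIT 2 below, is a special case handled by its own input format rather than by this interface):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.SplittableCompressionCodec;

public class SplittableCheck {
    public static void main(String[] args) {
        // The codec is resolved from the file extension (.gz, .bz2, .lzo, ...).
        CompressionCodecFactory factory = new CompressionCodecFactory(new Configuration());
        CompressionCodec codec = factory.getCodec(new Path(args[0]));

        if (codec == null) {
            // No codec matched: plain (uncompressed) files are splittable.
            System.out.println("no compression codec found: splittable");
        } else if (codec instanceof SplittableCompressionCodec) {
            // BZip2Codec is the one built-in codec that implements this interface.
            System.out.println(codec.getClass().getSimpleName() + ": splittable");
        } else {
            // GzipCodec, SnappyCodec, Lz4Codec, etc. do not.
            System.out.println(codec.getClass().getSimpleName() + ": not splittable");
        }
    }
}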

Regarding your question on partitioning: partitioning does not depend on the file format you use. It depends on the content of the file, i.e. the values of the partition column, such as a date.
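As a minimal illustration of that point, assuming Spark 1.x with Hive support and a hypothetical table logs partitioned by a date column dt (none of these names come from the question), a query that filters on the partition column prunes directories regardless of the file format inside them:

import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

// Spark 1.x style, matching the era of this answer. Assumes an existing
// JavaSparkContext named javaSparkContext; "logs" and "dt" are made-up names.
HiveContext hiveContext = new HiveContext(javaSparkContext.sc());

// Only the directories for dt='2015-12-10' are scanned, no matter whether
// the files inside them are gzip, bzip2, or plain text.
DataFrame oneDay = hiveContext.sql("SELECT * FROM logs WHERE dt = '2015-12-10'");
System.out.println("rows in that partition: " + oneDay.count());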

(Image: table of Hadoop compression formats and whether each is splittable.)

EDIT 1: Have a look at this SE question and this working code on Spark reading a zip file.

import java.util.List;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import scala.Tuple2;

// Each whole (unsplittable) file becomes one (path, content) record.
JavaPairRDD<String, String> fileNameContentsRDD = javaSparkContext.wholeTextFiles(args[0]);
// Count the lines of each file from its full in-memory content.
JavaRDD<String> lineCounts = fileNameContentsRDD.map(new Function<Tuple2<String, String>, String>() {
    @Override
    public String call(Tuple2<String, String> fileNameContent) throws Exception {
        String content = fileNameContent._2();
        int numLines = content.split("[\r\n]+").length;
        return fileNameContent._1() + ":  " + numLines;
    }
});
List<String> output = lineCounts.collect();

EDIT 2:

LZO files can be splittable.

LZO files can be split as long as the splits occur on block boundaries

Refer to this article for more details.
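As a rough sketch of reading indexed LZO so that it actually splits, assuming the hadoop-lzo jar is on the classpath, a made-up input path, and the same javaSparkContext as above (LzoTextInputFormat comes from the hadoop-lzo project, not from Spark):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import com.hadoop.mapreduce.LzoTextInputFormat;

// Assumes the .lzo files have already been indexed (e.g. with the hadoop-lzo
// indexer) so that .lzo.index files sit next to them; the path is made up.
JavaPairRDD<LongWritable, Text> raw = javaSparkContext.newAPIHadoopFile(
        "/data/logs/*.lzo",
        LzoTextInputFormat.class,
        LongWritable.class,
        Text.class,
        new Configuration());

// With the index present, each block boundary becomes an input split,
// so the RDD gets several partitions instead of one per file.
JavaRDD<String> lines = raw.map(tuple -> tuple._2().toString());
System.out.println("partitions: " + lines.partitions().size());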

  • So, if the default Hadoop files aren't `splittable`, then how does `sc.textFile()` still create an RDD of lines on HDFS files? Or does it not? How do I determine my file format? – makansij Dec 10 '15 at 19:01
  • Unsplittable does not mean that the file is not going to be processed. It means that data locality has been lost. If a 1 GB compressed unsplittable file is stored in 8 blocks on 8 different nodes, only one mapper is created to process the complete unsplittable file. – Ravindra babu Dec 10 '15 at 19:09
  • yeah, I know that the file will still get processed. But, in _Spark_, can an RDD still split the file up line-by-line and process it, or will it process it all as one? Essentially, how does the word `splittable` affect the way Spark processes the file? – makansij Dec 10 '15 at 19:39
  • Splittable files allow processing to be distributed over multiple worker nodes. For non-splittable files, I have updated the answer. – Ravindra babu Dec 11 '15 at 07:54
  • I'm not so hot on Java, so I'm slowly understanding that SE question you posted a link to. But, in [this](http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-1/) link, it suggests `repartition`ing an RDD of un-splittable files. So, you must be saying that only the initial RDD is affected by the `non-splittability`. After that, the RDD can be `repartition`ed. Correct? [see the sketch after these comments] – makansij Dec 12 '15 at 23:08
  • It is wrong that LZO files are not splittable. They are. You just have to index them. See the hadoop-lzo project. – markhor Oct 21 '16 at 10:05
  • Snappy is splittable as long as it's applied after containerization – GdD Apr 21 '17 at 09:27
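
Regarding the repartition discussion in the comments above, here is a minimal sketch, assuming the same javaSparkContext and a made-up gzip path, showing that only the initial read is limited by non-splittability:

import org.apache.spark.api.java.JavaRDD;

// A .gz file is not splittable, so textFile() reads it into a single
// partition, though the RDD still has one record per line.
JavaRDD<String> lines = javaSparkContext.textFile("/data/huge-file.gz");
System.out.println("partitions before: " + lines.partitions().size()); // 1

// repartition() shuffles the data, so everything after this point can run
// on many executors even though the initial read was a single task.
JavaRDD<String> spread = lines.repartition(16);
System.out.println("partitions after: " + spread.partitions().size()); // 16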