
People have been having problems compressing the output of Scalding jobs, myself included. After googling I get the odd whiff of an answer in some obscure forum somewhere, but nothing suitable for people's copy-and-paste needs.

I would like an output source like Tsv, but one that writes compressed output.

samthebest

2 Answers


Anyway, after much faffing about I managed to write a TsvCompressed output source which seems to do the job. You still need to set the Hadoop job configuration properties, i.e. set compress to true and set the codec to something sensible, or it defaults to crappy deflate; a sketch of setting those properties follows the code below.

import com.twitter.scalding._
import cascading.tuple.Fields
import cascading.scheme.local
import cascading.scheme.hadoop.{TextLine, TextDelimited}
import cascading.scheme.Scheme
import org.apache.hadoop.mapred.{OutputCollector, RecordReader, JobConf}

case class TsvCompressed(p: String) extends FixedPathSource(p) with DelimitedSchemeCompressed

trait DelimitedSchemeCompressed extends Source {
  val types: Array[Class[_]] = null

  override def localScheme = new local.TextDelimited(Fields.ALL, false, false, "\t", types)

  override def hdfsScheme = {
    val temp = new TextDelimited(Fields.ALL, false, false, "\t", types)
    // The crucial bit: enable compression on the sink side of the scheme.
    temp.setSinkCompression(TextLine.Compress.ENABLE)
    temp.asInstanceOf[Scheme[JobConf, RecordReader[_, _], OutputCollector[_, _], _, _]]
  }
}
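
For completeness, here is a minimal sketch of wiring TsvCompressed into a job and setting the compression properties mentioned above. It assumes a Scalding version whose Job exposes a config override; the gzip codec is just an example choice, and the job class and field names are made up for illustration.

import com.twitter.scalding._

class CompressedWordCountJob(args: Args) extends Job(args) {

  // Turn on output compression and pick a codec (these are the old mapred.*
  // property names; newer Hadoop versions use the mapreduce.* equivalents).
  override def config: Map[AnyRef, AnyRef] =
    super.config ++ Map(
      "mapred.output.compress" -> "true",
      "mapred.output.compression.codec" -> "org.apache.hadoop.io.compress.GzipCodec"
    )

  // Ordinary Fields-API word count, written through the compressed Tsv source above.
  TextLine(args("input"))
    .flatMap('line -> 'word) { line: String => line.split("""\s+""") }
    .groupBy('word) { _.size }
    .write(TsvCompressed(args("output")))
}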
samthebest

I also have a small project showing how to achieve compressed output from Tsv: WordCount-Compressed.

Scalding was passing null for the Cascading TextDelimited compression parameter, which disables compression.

morazow
  • Thanks, I took a look. What does `mapreduce.output.fileoutputformat.compress.type` `BLOCK` do? – samthebest Jun 18 '14 at 14:32
  • It is one of the compression types (RECORD, BLOCK, NONE) from Hadoop. More info [here](https://hadoop.apache.org/docs/r1.0.4/api/org/apache/hadoop/io/SequenceFile.CompressionType.html). Basically, instead of compressing each record individually, it compresses in blocks. The block size should be defined in Hadoop as well (see the sketch below). – morazow Jun 25 '14 at 11:33
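
As a rough illustration of that comment thread, this is how the block-compression properties might be set on a job's configuration. The property names are the Hadoop 2.x mapreduce.* ones (mapred.output.compression.type is the older Hadoop 1.x equivalent), the compression type only affects SequenceFile-based output formats, and the job class is hypothetical.

import com.twitter.scalding._

class BlockCompressedJob(args: Args) extends Job(args) {
  override def config: Map[AnyRef, AnyRef] =
    super.config ++ Map(
      "mapreduce.output.fileoutputformat.compress" -> "true",
      // RECORD, BLOCK or NONE; only meaningful for SequenceFile-based outputs.
      "mapreduce.output.fileoutputformat.compress.type" -> "BLOCK",
      "mapreduce.output.fileoutputformat.compress.codec" -> "org.apache.hadoop.io.compress.GzipCodec"
    )

  // ... pipe assembly as usual ...
}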