2

Imagine a sequence of java.io.File objects. The sequence is in no particular order; it gets populated after a directory traversal. The file names can look like this:

/some/file.bin
/some/other_file_x1.bin
/some/other_file_x2.bin
/some/other_file_x3.bin
/some/other_file_x4.bin
/some/other_file_x5.bin
...
/some/x_file_part1.bin
/some/x_file_part2.bin
/some/x_file_part3.bin
/some/x_file_part4.bin
/some/x_file_part5.bin
...
/some/x_file_part10.bin

Basically, I can have 3 types of files. The first type is the simple one, which only has a .bin extension. The second type of file is composed of pieces named _x1.bin through _x5.bin. And the third type of file is composed of 10 smaller parts, named _part1 through _part10. I know the naming may be strange, but this is what I have to work with :)
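The extractors I have in mind for the two multi-part schemes look roughly like this — the regexes are just my guess at the exact patterns, so treat them as a sketch:

```scala
import java.io.File

// Extractor for the 5-piece scheme: name_x1.bin .. name_x5.bin
object ComposedOf5Name {
  private val Pattern = """(.*)_x[1-5]\.bin""".r
  def unapply(f: File): Option[String] = f.getPath match {
    case Pattern(prefix) => Some(prefix) // e.g. "/some/other_file"
    case _               => None
  }
}

// Extractor for the 10-part scheme: name_part1.bin .. name_part10.bin
object ComposedOf10Name {
  private val Pattern = """(.*)_part(?:[1-9]|10)\.bin""".r
  def unapply(f: File): Option[String] = f.getPath match {
    case Pattern(prefix) => Some(prefix) // e.g. "/some/x_file"
    case _               => None
  }
}
```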

I want to group the files together (all the pieces of a file should be processed together), and I was thinking of using parallel arrays to do this. The thing I'm not sure about is how I can perform the reduce/accumulation part, since all the threads will be accumulating into the same map.

val allBinFiles = allBins.toArray // array of java.io.File

I was thinking of handling it with something like this:

// java.util.Collections.synchronizedMap returns a java.util.Map, which has no
// getOrElseUpdate, so a synchronized Scala map is needed here instead
val mapAcumulator = new mutable.HashMap[String, ListBuffer[File]] with mutable.SynchronizedMap[String, ListBuffer[File]]

allBinFiles.par.foreach { file =>
   file match {
      // for something like /some/x_file_x4.bin nameTillPart will be /some/x_file
      case ComposedOf5Name(nameTillPart) => {
          mapAcumulator.getOrElseUpdate(nameTillPart,new ListBuffer[File]()) += file
      }
      case ComposedOf10Name(nameTillPart) => {
          mapAcumulator.getOrElseUpdate(nameTillPart,new ListBuffer[File]()) += file
      }
      // simple file, without any pieces
      case _ => {
          mapAcumulator.getOrElseUpdate(file.toString,new ListBuffer[File]()) += file
      }
   }
}

I was thinking of doing it like I've shown in the code above: having extractors for the files, and using part of the path as the key in the map. For example, /some/x_file could hold as values /some/x_file_x1.bin to /some/x_file_x5.bin. I also think there could be a better way of handling this. I would be interested in your opinions.
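I also wondered whether I could avoid the shared map entirely by folding per-partition maps and merging them with aggregate. Just a sketch with a couple of sample files — keyOf here is a placeholder standing in for the extractor logic:

```scala
import java.io.File

// Placeholder key function; the real one would use the extractors
def keyOf(f: File): String =
  f.getPath.replaceAll("""_(?:x[1-5]|part(?:[1-9]|10))\.bin$""", "")

val allBinFiles = Array(
  new File("/some/file.bin"),
  new File("/some/other_file_x1.bin"),
  new File("/some/other_file_x2.bin")
)

val grouped: Map[String, List[File]] =
  allBinFiles.par.aggregate(Map.empty[String, List[File]])(
    // seqop: fold one file into a partition-local map (no shared state)
    (m, f) => { val k = keyOf(f); m.updated(k, f :: m.getOrElse(k, Nil)) },
    // combop: merge two partition-local maps
    (m1, m2) => m2.foldLeft(m1) { case (acc, (k, fs)) =>
      acc.updated(k, fs ::: acc.getOrElse(k, Nil))
    }
  )
```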

Geo
  • Is this something that has to be run once or do you need to do it on a regular basis? Will the files actually be read at some point? If so, the task might be IO-bound and the optimization(parallelization) at least premature, if not outright unnecessary. – Kim Stebel May 11 '11 at 08:12
  • The files will be partially read at a later point, some processing will be done based on their content, and a lot of compressing will happen. I also plan to do the compressing in parallel. – Geo May 11 '11 at 08:15

1 Answer

1

The alternative is to use groupBy:

val mp = allBinFiles.par.groupBy {
  case ComposedOf5Name(x) => x
  case ComposedOf10Name(x) => x
  case f => f.toString
}

This will return a parallel map of parallel arrays of files (ParMap[String, ParArray[File]]). If you want a sequential map of sequential sequences of files from this point:

val sqmp = mp.map { case (k, v) => (k, v.seq) }.seq

To ensure that the parallelism kicks in, make sure you have enough elements in your parallel array (10k+).
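To illustrate, here's a self-contained version of this approach on plain path strings; the regexes are my guess at the naming patterns from the question:

```scala
// Assumed regexes for the question's two multi-part naming schemes
val XName    = """(.*)_x[1-5]\.bin""".r
val PartName = """(.*)_part(?:[1-9]|10)\.bin""".r

val paths = Vector(
  "/some/file.bin",
  "/some/other_file_x1.bin",
  "/some/other_file_x2.bin",
  "/some/x_file_part1.bin",
  "/some/x_file_part10.bin"
)

// group in parallel, then convert back to sequential collections
val grouped = paths.par.groupBy {
  case XName(prefix)    => prefix
  case PartName(prefix) => prefix
  case p                => p // simple file: the path itself is the key
}
val sqmp = grouped.map { case (k, v) => (k, v.seq) }.seq
// keys: /some/file.bin, /some/other_file, /some/x_file
```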

axel22
  • The size depends, it's usually several thousand files, but I don't know if it's 10k+ all the time. Why/how is that 10k limit defined? – Geo May 11 '11 at 08:18
  • It's not a hard limit, just a rule of thumb. This depends on how expensive your higher-order operator is. Usually, when the higher-order operator is very cheap, at least 10k elements are needed to notice the speedup, for most operations. This is due to the underlying fork/join framework and its overheads (switching thread contexts, synchronizing, etc.). In your case, where you have several extractors searching strings in your HOP, several thousand might be enough. I would suggest that you try to benchmark it and see if you get a speedup. – axel22 May 11 '11 at 08:22
  • By the way, the groupBy will group all of the 5-part files together, and all of the 10-part files together, in one array. I want to group them separately, based on the name without the `_part`/`_x` suffix. – Geo May 12 '11 at 08:40