
Once the CSV is loaded via read.csv, it's fairly trivial to use multicore, segue, etc. to play around with the data. Reading it in, however, is quite the time sink.

I realise it would be better to use MySQL or similar, but assume that's not an option here.

Assume the use of an AWS 8xl cluster compute instance running R 2.13.

Specs as follows:

Cluster Compute Eight Extra Large specifications:
88 EC2 Compute Units (2 x eight-core Intel Xeon)
60.5 GB of memory
3370 GB of instance storage
64-bit platform
I/O Performance: Very High (10 Gigabit Ethernet)

Any thoughts / ideas much appreciated.

n.e.w

3 Answers


Going parallel might not be needed if you use fread from the data.table package.

library(data.table)
dt <- fread("myFile.csv")

A comment on this question illustrates its power. Also, here's an example from my own experience:

d1 <- fread('Tr1PointData_ByTime_new.csv')
Read 1048575 rows and 5 (of 5) columns from 0.043 GB file in 00:00:09

I was able to read in just over a million rows in under 10 seconds!
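
As an aside: newer versions of data.table (1.11+, which may well not run on an R 2.13 setup — that's an assumption to check) parallelise fread internally, so even a single call can use every core. A minimal sketch, with hypothetical column names:

library(data.table)
dt <- fread("myFile.csv",
            nThread = parallel::detectCores(),  # threads used for parsing
            select = c("time", "value"))        # hypothetical column names; select trims I/O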

Richard Erickson
  • Hi, is it worth running fread in parallel when there are multiple files? Or is disk access the limitation? – Boris Sep 01 '16 at 09:49
  • Hi Boris, I would suggest you post a new question with your problem. The answer depends upon how much memory you need, the size of your files, and what you're trying to do. Also, are you memory- or CPU-limited? – Richard Erickson Sep 01 '16 at 13:41

What you could do is use scan. Two of its input arguments are interesting here: n and skip. You open two or more connections to the file and use skip and n to select the part each one reads. There are some caveats:

  • At some stage, disk I/O might prove to be the bottleneck.
  • I hope that scan does not complain when opening multiple connections to the same file.

But you could give it a try and see if it gives your speed a boost; a rough sketch follows.
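
A minimal sketch of the idea, assuming a comma-separated file of purely numeric columns with a header row; the file name, column count, and row count are illustrative assumptions:

library(parallel)

n_cols   <- 5                        # assumed number of columns
n_rows   <- 1e6                      # assumed number of data rows
n_jobs   <- 8
rows_per <- ceiling(n_rows / n_jobs)

read_chunk <- function(i) {
  # skip the header plus all rows belonging to earlier chunks;
  # note that n counts fields, not lines
  scan("myFile.csv", what = double(), sep = ",",
       skip = 1 + (i - 1) * rows_per,
       n = n_cols * rows_per, quiet = TRUE)
}

chunks <- mclapply(seq_len(n_jobs), read_chunk, mc.cores = n_jobs)
d <- matrix(unlist(chunks), ncol = n_cols, byrow = TRUE)

Each worker reads its own slice of the file, so both caveats above apply.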

Paul Hiemstra

Flash or conventional HD storage? If the latter, and you don't know where the file sits on the drives and how it's split, it's very hard to speed things up: multiple simultaneous reads will not be faster than one streamed read. That's because of the disk, not the CPU. There's no way to parallelize this without starting at the storage level of the file.

If it's flash storage, then a solution like Paul Hiemstra's might help, since good flash storage can have excellent random read performance, close to sequential. Try it... but if it doesn't help, you'll know why.

Also... a fast storage interface doesn't necessarily mean the drives can saturate it. Have you run performance testing on the drives to see how fast they really are?
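
One crude way to measure raw sequential throughput from inside R (file name assumed; the OS page cache will flatter repeat runs, so test on a file that hasn't just been read):

sz <- file.info("myFile.csv")$size
t <- system.time(invisible(readBin("myFile.csv", what = "raw", n = sz)))
sz / t[["elapsed"]] / 2^20   # rough throughput in MiB/s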

John