
I have a 5 GB CSV with 2 million rows. The header is a row of comma-separated strings and each data row consists of comma-separated doubles, with no missing or corrupted data. The file is rectangular.

My objective is to read a random 10% (with or without replacement, doesn't matter) of the rows into RAM as fast as possible. An example of a slow solution (but faster than read.csv) is to read in the whole matrix with fread and then keep a random 10% of the rows.

require(data.table)
X <- data.matrix(fread('/home/user/test.csv'))  # reads the full data.matrix
X <- X[sample(nrow(X), round(nrow(X) / 10)), ]  # keep a random 10% of the rows

However, I'm looking for the fastest possible solution (this is slow because the whole file has to be read in first and then trimmed down afterwards).

The solution deserving of a bounty will give system.time() estimates of different alternatives.
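
For reference, such timings could be collected with a small harness along these lines (a sketch only: the path is the file described above, 200000 is roughly 10% of the 2 million rows, and the shuf pipeline is just one candidate alternative):

library(data.table)

# Hypothetical timing harness; '/home/user/test.csv' is the file described above.
t_full <- system.time({
  X <- data.matrix(fread("/home/user/test.csv"))
  X <- X[sample(nrow(X), round(nrow(X) / 10)), ]
})

t_shuf <- system.time({
  Y <- fread("tail -n+2 /home/user/test.csv | shuf -n 200000", header = FALSE)
})

list(full_then_sample = t_full, shuf_pipe = t_shuf)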

Other:

  • I am using Linux
  • I don't need exactly 10% of the rows. Just approximately 10%.
user2763361
  • How many times do you need to sample 10% of the rows? – Joshua Ulrich Aug 11 '14 at 13:09
  • Are you working in a Linux environment? `shuf -n N test.csv > output.csv` should work for pulling out N random observations, and it seems to be quite fast. http://stackoverflow.com/questions/9245638/select-random-lines-from-a-file-in-bash – Mike.Gahan Aug 11 '14 at 13:11
  • Isn't shuf going to have to read the whole thing into memory (or memory-map it) and so be slooow for a 5GB file? Worth a test... – Spacedman Aug 11 '14 at 13:28
  • Do you need *exactly* 10% or is having each line with a probability of 1/10 acceptable? The latter is easy to do with an `awk` command line filter, but might not give you exactly 1/10th of the lines. – Spacedman Aug 11 '14 at 13:34
  • @Spacedman Not exactly 10%. 1/10 probability is perfectly fine. – user2763361 Aug 11 '14 at 13:43
  • Good question @Spacedman. I am not sure how `shuf` scales. I will be interested on any results that come. – Mike.Gahan Aug 11 '14 at 14:20
  • I'm impressed. `shuf -n 10000 big.csv` where `big.csv` is 1M lines takes 0.2s. Suspect it scans once for line ends and then seeks about to output the lines. Hmm I think it mmaps it and works in memory. Must find the source... – Spacedman Aug 11 '14 at 14:41

3 Answers


I think this should work pretty quickly, but let me know, since I have not tried it with big data yet.

library(data.table)  # for fread()

write.csv(iris, "iris.csv")

fread("shuf -n 5 iris.csv")

    V1  V2  V3  V4  V5         V6
1:  37 5.5 3.5 1.3 0.2     setosa
2:  88 6.3 2.3 4.4 1.3 versicolor
3:  84 6.0 2.7 5.1 1.6 versicolor
4: 125 6.7 3.3 5.7 2.1  virginica
5: 114 5.7 2.5 5.0 2.0  virginica

This takes a random sample of N = 5 rows from the iris dataset.

To avoid the chance of the header row turning up in the sample (shuf shuffles every line of the file, including the header), this might be a useful modification:

fread("tail -n+2 iris.csv | shuf -n 5", header=FALSE)

Mike.Gahan
  • Gives me a "file not found: shuf -n 5 iris.csv". It's looking for a file with that name, rather than running that command and piping it. Have I got an old data.table package? – Spacedman Aug 11 '14 at 13:32
  • @Spacedman Looking at the first line of his MWE, he is saving to `iris.csv` *after* having loaded it in in a piece of code that he didn't copy and paste into the example. – user2763361 Aug 11 '14 at 13:47
  • `data.table` only started allowing statements like this as of version 1.9.2 (At least I believe this is the case. I am using data.table 1.9.3) – Mike.Gahan Aug 11 '14 at 13:59
  • @user2763361 `iris` does not need to be loaded in (since it is part of the base R package). It has already been loaded into the environment for everyone. – Mike.Gahan Aug 11 '14 at 14:01
  • @Mike.Gahan Works for me now I've upgraded data.table! – Spacedman Aug 11 '14 at 14:50
  • This is fastest. For selecting 160k from 1.6M line file, my `awk` takes 0.4s (and selects approx 160k lines), `shuf` takes 0.3s, and the `sqldf` solution takes 20s. – Spacedman Aug 11 '14 at 15:07
  • Is there any way to have a number index of which rows were sampled be returned by the fread call? e.g. Currently the rownames are 1:, 2:, 3:, 4:, 5:, ... but if we sampled the 5th, 10th, 25th, 88th, 109th rows, is it possible to get these indices? – user2763361 Aug 13 '14 at 00:28
  • Also is there any way to do a similar thing when we know the row indices that we want to sample in advance? – user2763361 Aug 13 '14 at 00:34
  • Also what do I do about headers? Say I want to shuffle the file like this but ignore the header (since I only want to get numbers)? – user2763361 Aug 13 '14 at 00:57
  • All very good questions. It looks like `shuf` already ignores headers (at least it seems that way from working with the iris file). As for getting row indices, there has to be a way to do this in `bash` and then extract the file. But I do not know how to do this. – Mike.Gahan Aug 13 '14 at 12:51
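
A sketch addressing the two index questions in the comments above (hypothetical and untested at scale; it assumes GNU awk and shuf on Linux, with test.csv standing in for the question's file):

library(data.table)

# 1. Record which rows were sampled: prepend the data-row number before shuffling,
#    so it comes back as the first column of the result.
idx_sample <- fread("awk 'NR > 1 {print NR-1\",\"$0}' test.csv | shuf -n 200000",
                    header = FALSE)

# 2. Pull rows whose indices are known in advance: write the wanted data-row
#    numbers (+1 to step over the header) to a file and let awk filter on them.
wanted <- c(5, 10, 25, 88, 109)
writeLines(as.character(wanted + 1), "wanted.txt")
known  <- fread("awk 'NR==FNR {keep[$1]; next} FNR in keep' wanted.txt test.csv",
                header = FALSE)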

Here's a file with 100000 lines in it like this:

"","a","b","c"
"1",0.825049088569358,0.556148858508095,0.591679535107687
"2",0.161556158447638,0.250450366642326,0.575034103123471
"3",0.676798462402076,0.0854280597995967,0.842135070590302
"4",0.650981109589338,0.204736212035641,0.456373531138524
"5",0.51552157686092,0.420454133534804,0.12279288447462

$ wc -l d.csv 
100001 d.csv

So that's 100000 lines plus a header. We want to keep the header and keep each data line if a random number between 0 and 1 is greater than 0.9 (srand() seeds awk's generator so repeated runs give different samples, and next stops the header line from being tested a second time):

$ awk 'BEGIN{srand()} NR==1 {print; next} rand()>.9 {print}' < d.csv > sample.csv

check:

$ head sample.csv 
"","a","b","c"
"12",0.732729186303914,0.744814146542922,0.199768838472664
"35",0.00979996216483414,0.633388962829486,0.364802648313344
"36",0.927218825090677,0.730419414117932,0.522808947600424
"42",0.383301998255774,0.349473554175347,0.311060158303007

and it has 10027 lines:

$ wc -l sample.csv 
10027 sample.csv

This took 0.033s of real time on my 4-year-old box; the HD speed is probably the limiting factor here. It should scale linearly, since the file is dealt with strictly line by line.

You then read in sample.csv using read.csv or fread as desired:

> s = fread("sample.csv")
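
The intermediate file can also be skipped by running the same awk filter inside the fread call, in the style of the shuf answer above (a sketch, untested on a 5GB file):

library(data.table)

# Keep the header plus each data line with probability ~0.1, piped straight into fread.
s <- fread("awk 'BEGIN{srand()} NR==1 {print; next} rand()>.9 {print}' d.csv")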
Spacedman

You could use sqldf::read.csv.sql and an SQL command to pull the data in:

library(sqldf)
write.csv(iris, "iris.csv", quote = FALSE, row.names = FALSE) # write a csv file to test with
read.csv.sql("iris.csv","SELECT * FROM file ORDER BY RANDOM() LIMIT 10")
   Sepal_Length Sepal_Width Petal_Length Petal_Width    Species
1           6.3         2.8          5.1         1.5  virginica
2           4.6         3.1          1.5         0.2     setosa
3           5.4         3.9          1.7         0.4     setosa
4           4.9         3.0          1.4         0.2     setosa
5           5.9         3.0          4.2         1.5 versicolor
6           6.6         2.9          4.6         1.3 versicolor
7           4.3         3.0          1.1         0.1     setosa
8           4.8         3.4          1.9         0.2     setosa
9           6.7         3.3          5.7         2.5  virginica
10          5.9         3.2          4.8         1.8 versicolor

It doesn't calculate the 10% for you, but you can choose the absolute limit of rows to return.
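
If approximately 10% is good enough, the full sort behind ORDER BY RANDOM() can be avoided by filtering instead. A sketch using SQLite's modulo on RANDOM() (untested on a file of this size):

library(sqldf)

# Keep each row with probability ~1/10 rather than sorting the whole table.
s <- read.csv.sql("iris.csv", "SELECT * FROM file WHERE RANDOM() % 10 = 0")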

James