I have a dataset of financial transactions; it's pretty big, but small enough to keep in memory.
R> str(trans)
'data.frame': 130000000 obs. of 5 variables:
$ id : int 5 5 5 5 6 11 11 11 11 11 ...
$ kod : int 2 3 2 3 38 2 3 6 7 6 ...
$ ar : int 329 329 330 330 7 329 329 329 329 329 ...
$ belopp: num 1531 -229.3 324 -48.9 0 ...
$ datum : int 36976 36976 37287 37287 37961 36976 36976 37236 37236 37281 ...
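(For anyone who wants to reproduce the timings below, a smaller dataset of the same shape can be simulated; the values here are made up and only mimic the structure above.)
# Simulate a smaller dataset with the same columns (values are made up)
set.seed(1)
n <- 1e6
trans <- data.frame(
  id     = sort(sample(1e5, n, replace = TRUE)),
  kod    = sample(40, n, replace = TRUE),
  ar     = sample(300:340, n, replace = TRUE),
  belopp = round(rnorm(n, 0, 500), 2),
  datum  = sample(36000:38000, n, replace = TRUE)
)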
I need to loop through it, extracting the transactions for each unique id and doing a bunch of calculations on them. The trouble is that subsetting the dataset is way too slow:
R> system.time(
+ sub <- trans[trans$id==15,]
+ )
user system elapsed
7.80 0.55 8.36
R> system.time(
+ sub <- subset(trans, id == 15)
+ )
user system elapsed
8.49 1.05 9.53
As there are about 10 million unique ids in this dataset, such a loop would take forever. Any ideas how I might speed it up?
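For context, the loop I have in mind looks roughly like this; do_calculations() is just a placeholder for the real per-id work:
ids <- unique(trans$id)
results <- vector("list", length(ids))
for (i in seq_along(ids)) {
  sub <- trans[trans$id == ids[i], ]    # ~8 s per iteration at this size
  results[[i]] <- do_calculations(sub)  # placeholder for the actual calculations
}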
EDIT: I've dabbled with `data.table`, indexing, and sorting, with not much luck at all:
library(data.table)
trans2 <- as.data.table(trans)
setkey(trans2, id)  # setkey() sorts by reference, so a separate order() step isn't needed
R> system.time(
+ sub <- trans2[trans2$id==15,]
+ )
user system elapsed
7.33 1.08 8.41
R> system.time(
+ sub <- subset(trans2, id == 15)
+ )
user system elapsed
8.66 1.12 9.78
EDIT 2: Awesome.
R> system.time(
+ sub <- trans2[J(15)]
+ )
user system elapsed
0 0 0
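For posterity, the reason this is fast: trans2[trans2$id == 15, ] compares every element of the id column (a vector scan), while trans2[J(15)] does a binary search on the sorted key. A sketch of the whole loop rewritten this way, again with do_calculations() standing in for the real per-id work:
# Keyed lookup inside the loop: binary search instead of a vector scan
ids <- unique(trans2$id)
results <- vector("list", length(ids))
for (i in seq_along(ids)) {
  results[[i]] <- do_calculations(trans2[J(ids[i])])
}

# Or let data.table do the grouping itself, passing one .SD per id,
# which avoids the explicit loop entirely:
results <- trans2[, do_calculations(.SD), by = id]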