
I have a data table with a number of social media users and their followers. The original data table has the following format:

X.USERID FOLLOWERS
1081     4053807021,2476584389,4713715543, ...

So each row contains a user ID together with a vector of that user's followers (separated by commas). In total I have 24,000 unique user IDs and 160,000,000 unique followers. I wish to convert my original table to the following format:

X.USERID          FOLLOWERS
1:     1081         4053807021
2:     1081         2476584389
3:     1081         4713715543
4:     1081          580410695
5:     1081         4827723557
6:     1081 704326016165142528

In order to get this data table I used the following line of code (assume that my original data table is called dt):

uf <- dt[,list(FOLLOWERS = unlist(strsplit(x = FOLLOWERS, split= ','))), by = X.USERID]

However, when I run this code on the entire dataset I get the following error:

negative length vectors are not allowed

According to this post on Stack Overflow (Negative number of rows in data.table after incorrect use of set), it seems that I am bumping into the memory limits of a column in data.table. As a workaround, I ran the code in blocks of 10,000 user IDs, and this seemed to work.
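The blockwise workaround can be sketched as follows (a minimal sketch assuming `dt` has columns `X.USERID` and `FOLLOWERS` as shown above; the chunk size of 10,000 is the one mentioned):

```r
library(data.table)

# Process the user IDs in blocks of 10,000 so that no single
# intermediate vector produced by strsplit()/unlist() overflows
# R's vector-length limit.
chunk_size <- 10000L
ids <- unique(dt$X.USERID)
blocks <- split(ids, ceiling(seq_along(ids) / chunk_size))

uf <- rbindlist(lapply(blocks, function(id_block) {
  dt[X.USERID %in% id_block,
     list(FOLLOWERS = unlist(strsplit(FOLLOWERS, split = ",", fixed = TRUE))),
     by = X.USERID]
}))
```

`fixed = TRUE` tells `strsplit` to treat the comma as a literal rather than a regular expression, which also saves some time on large inputs.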

My question is: if I change my code, can I prevent this error from occurring, or am I bumping into the limits of R?

PS. I have a machine with 140 GB of RAM at my disposal, so physical memory should not be the issue.

> memory.limit()
[1] 147446
  • you may try to look for a replacement of `strsplit` as this is probably the least efficient part of your query. – jangorecki Apr 25 '16 at 17:18
    `stri_split` from the `stringi` package was about 3 times faster when I tested it on a fake data file with 100 IDs and 100,000 followers per ID. – eipi10 Apr 25 '16 at 22:27
  • the total number of "followings" would matter more than the unique followers... your 140 GB could have been blown up before you could even load the initial table – RolandASc Jan 31 '18 at 20:57
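Following the `stringi` suggestion in the comments, the split could be rewritten like this (a sketch, not benchmarked here; `stri_split_fixed` treats the separator as a literal string):

```r
library(data.table)
library(stringi)

# Same output as the strsplit() version, but stri_split_fixed() is
# typically faster on large character vectors.
uf <- dt[, list(FOLLOWERS = unlist(stri_split_fixed(FOLLOWERS, ","))),
         by = X.USERID]
```

This only speeds up the split, though; it does not by itself avoid the vector-length limit that causes the error.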

1 Answer


This problem occurs when the length of a vector in your computation exceeds R's limit of 2^31 - 1 elements. One way to deal with it is to read your dataset in chunks (within a loop). Since your file appears to be sorted by the X.USERID field, your chunks should be split on user boundaries (or overlap slightly) to ensure each user's complete follower list ends up in a single chunk. How you process these chunks depends on what you need to do with your data.
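A minimal sketch of reading and expanding the file in chunks (the file name `followers.csv` and the chunk size are placeholders; since each user occupies a single row in the original format, row-aligned chunks already keep each user's followers together):

```r
library(data.table)

path <- "followers.csv"          # placeholder file name
chunk_size <- 5000L              # rows of the one-row-per-user file
header <- names(fread(path, nrows = 0L))
skip <- 0L
parts <- list()

repeat {
  # skip the header line plus the rows already processed
  chunk <- fread(path, skip = skip + 1L, nrows = chunk_size,
                 header = FALSE, col.names = header)
  if (nrow(chunk) == 0L) break
  # Expand this chunk only; each chunk stays well below the
  # 2^31 - 1 vector-length limit.
  parts[[length(parts) + 1L]] <-
    chunk[, list(FOLLOWERS = unlist(strsplit(FOLLOWERS, ",", fixed = TRUE))),
          by = X.USERID]
  skip <- skip + chunk_size
}

result <- rbindlist(parts)
```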

Katia