
I have a 500K-row Spark DataFrame that lives in a Parquet file. I'm using Spark 2.0.0 and the SparkR package inside Spark (RStudio and R 3.3.1), all running on a local machine with 4 cores and 8 GB of RAM.

To facilitate constructing a dataset I can work on in R, I use the collect() method to bring the Spark DataFrame into R. Doing so takes about 3 minutes, which is far longer than it'd take to read an equivalently sized CSV file using the data.table package.
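For context, the round trip looks roughly like this (the file path and session settings below are placeholders, not my actual configuration):

```r
# Minimal sketch of the read-and-collect step (path and settings are illustrative)
library(SparkR)

sparkR.session(master = "local[4]",
               sparkConfig = list(spark.driver.memory = "4g"))

# Spark DataFrame backed by the Parquet file (~500K rows)
sdf <- read.parquet("/path/to/data.parquet")

# Time the transfer from Spark into a local R data.frame
system.time(local_df <- collect(sdf))
```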

Admittedly, the Parquet file is compressed and the time needed for decompression could be part of the issue, but I've found other comments online about the collect() method being particularly slow, and little in the way of explanation.

I've tried the same operation in sparklyr, and it's much faster. Unfortunately, sparklyr doesn't let me do date math inside joins and filters as easily as SparkR does, so I'm stuck using SparkR. In addition, I don't believe I can use both packages at the same time (i.e. run queries using SparkR calls, and then access those Spark objects using sparklyr).
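The sparklyr version of the same read-and-collect step looks roughly like this (again, the connection settings and path are placeholders):

```r
# Rough sparklyr equivalent of the same workflow (settings are illustrative)
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# Register the Parquet file as a Spark table
tbl <- spark_read_parquet(sc, name = "mytable", path = "/path/to/data.parquet")

# Pull the data into a local R data frame; much faster than SparkR's collect() in my tests
local_df <- collect(tbl)
```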

Does anyone have a similar experience, an explanation for the relative slowness of SparkR's collect() method, and/or any solutions?

Wil Van Cleve

2 Answers


@Wil

I don't know whether the following actually answers your question, but Spark uses lazy evaluation. The transformations done in Spark (or SparkR) don't really create any data; they just build a logical plan to follow.

When you run an action like collect(), Spark has to fetch the data directly from the source RDDs (assuming you haven't cached or persisted the data).

If your data is small enough to be handled easily by local R, there is no need to go through SparkR at all. Another option is to cache your data if you use it frequently.
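A rough sketch of the caching idea in SparkR (the path and column name here are made up for illustration, and an active Spark session is assumed):

```r
# Sketch: lazy transformations plus caching in SparkR (illustrative names only)
library(SparkR)

sdf <- read.parquet("/path/to/data.parquet")
filtered <- filter(sdf, sdf$some_column > 0)   # lazy: only builds a logical plan

cache(filtered)   # mark the result for in-memory caching
count(filtered)   # run an action so the cache is actually materialized

# Subsequent actions reuse the cached data instead of re-reading the Parquet file
local_df <- collect(filtered)
```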

Mohit Bansal
  • The 500K-row example is only one example; it's drawn from tables with 300M rows. Spark is required to make this work in my setup, but the slowness of moving data between Spark and R is a major slowdown. – Wil Van Cleve Sep 20 '16 at 21:01

In short: serialization/deserialization is very slow. See, for example, the post on my blog: http://dsnotes.com/articles/r-read-hdfs. However, it should be equally slow in both SparkR and sparklyr.

Dmitriy Selivanov