24

Why does the rdd.sample() function on a Spark RDD return a different number of elements even though the fraction parameter is the same? For example, if my code is as below:

val a = sc.parallelize(1 to 10000, 3)
a.sample(false, 0.1).count

Every time I run the second line of the code it returns a different number, never exactly 1000. I actually expect to see 1000 every time, even though the 1000 elements themselves might differ. Can anyone tell me how I can get a sample whose size is exactly 1000? Thank you very much.

Metropolis
Carter

3 Answers

46

If you want an exact sample, try doing

a.takeSample(false, 1000)

But note that this returns an Array and not an RDD.

As for why a.sample(false, 0.1) doesn't return the same sample size every time: it's because Spark internally uses something called Bernoulli sampling to take the sample. The fraction argument doesn't represent a fixed fraction of the RDD's actual size. It represents the probability of each element in the population being selected for the sample, and as Wikipedia says:

Because each element of the population is considered separately for the sample, the sample size is not fixed but rather follows a binomial distribution.

And that essentially means that the number doesn't remain fixed.
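As a quick illustration (my own, assuming the same a as above): repeated runs scatter around 1000, roughly within the binomial standard deviation sqrt(n·p·(1−p)) ≈ 30.

val counts = (1 to 5).map(_ => a.sample(false, 0.1).count)  // e.g. 987, 1012, 996, ... (varies per run)

val (n, p) = (10000, 0.1)
math.sqrt(n * p * (1 - p))  // ≈ 30, the expected spread of the sample size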

If you set the first argument to true, it will use something called Poisson sampling, which also results in a sample size that is not fixed.
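For example (sampling with replacement; again, the count varies from run to run):

a.sample(true, 0.1).count  // varies around 1000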

Update

If you want to stick with the sample method, you can specify a larger probability for the fraction param and then call take, as in:

a.sample(false, 0.2).take(1000)

This should result in a sample size of 1000 most of the time, though not necessarily always. It works best if you have a large enough population.
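If you want to guard against the rare run where the oversample comes up short, one option (my own sketch, not part of the sample API) is to check the count and fall back to takeSample:

val s = a.sample(false, 0.2)
val exactly1000 = if (s.count >= 1000) s.take(1000) else a.takeSample(false, 1000)  // Array of 1000 elements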

Bhashit Parikh
4

Another way is to first call takeSample and then make an RDD out of the result. This might be slow with large datasets.

sc.makeRDD(a.takeSample(false, 1000, 1234))
Laeeq
0

For a sample count that is off by no more than the number of partitions (possibly requiring balanced partitions):

  • Write your own custom RDD.compute rather than using RDD.sample.
  • The number desired per partition is total/numParts; for each partition proceed as follows.
  • At the outset, use fraction as the probability of yielding vs. skipping each element.
  • Upon reaching the desired count, unconditionally skip the rest.
  • If running low, such that all of the remaining elements are needed, unconditionally yield all of them.

I have not done any rigorous analysis of the randomness of this approach, but for applications that don't require that level of rigor it may be pragmatically useful.
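A minimal sketch of the idea (my own illustration; it uses mapPartitions instead of a custom RDD.compute, and it assumes roughly balanced partitions):

import scala.util.Random
import org.apache.spark.rdd.RDD

// Sketch: per-partition sampling capped at target/numPartitions elements.
def cappedSample(rdd: RDD[Int], target: Int, fraction: Double): RDD[Int] = {
  val perPart = target / rdd.getNumPartitions
  rdd.mapPartitions { iter =>
    val elems = iter.toArray            // need the partition size to know what "the rest" is
    val rng = new Random()
    val out = scala.collection.mutable.ArrayBuffer[Int]()
    var i = 0
    while (i < elems.length && out.size < perPart) {
      val needed = perPart - out.size
      val remaining = elems.length - i
      // Running low: if all remaining elements are needed, yield them unconditionally;
      // otherwise yield with probability `fraction`.
      if (remaining <= needed || rng.nextDouble() < fraction) out += elems(i)
      i += 1
    }
    out.iterator                        // once perPart is reached, the rest is skipped
  }
}

cappedSample(a, 1000, 0.1).count  // at most 1000, off by no more than the number of partitions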

Randall Whitman