69

I'm trying to read a fairly large CSV file with Pandas and split it into two random chunks, one containing 10% of the data and the other the remaining 90%.

Here's my current attempt:

rows = data.index
row_count = len(rows)
random.shuffle(list(rows))

data.reindex(rows)

training_data = data[row_count // 10:]
testing_data = data[:row_count // 10]

For some reason, sklearn throws this error when I try to use one of these resulting DataFrame objects inside of a SVM classifier:

IndexError: each subindex must be either a slice, an integer, Ellipsis, or newaxis

I think I'm doing it wrong. Is there a better way to do this?

Blender
  • 289,723
  • 53
  • 439
  • 496
  • 3
    Incidentally, this wouldn't randomly shuffle correctly anyway - the problem is `random.shuffle(list(rows))`. `shuffle` alters the data it operates on, but when you call `list(rows)`, you make a copy of `rows` that gets altered and then thrown away - the underlying pandas Series, `rows`, is unchanged. One solution is to call `rows = list(rows)`, then `random.shuffle(rows)` and `data.reindex(rows)` after that. – spencer nelson Feb 20 '13 at 00:10
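The fix described in the comment above can be sketched like this (using a small random DataFrame in place of the CSV, since the original file isn't available):

```python
import random

import numpy as np
import pandas as pd

# Stand-in for the CSV data from the question.
data = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))

rows = list(data.index)   # materialize the labels so shuffle can mutate them
random.shuffle(rows)      # shuffles the list in place and returns None

data = data.reindex(rows)  # reindex returns a new frame; it must be assigned

split = len(rows) // 10
testing_data = data[:split]    # first 10% of the shuffled rows
training_data = data[split:]   # remaining 90%
```

The two bugs this avoids: `random.shuffle(list(rows))` shuffled a throwaway copy, and the result of `data.reindex(rows)` was never assigned back.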

5 Answers

81

What version of pandas are you using? For me, your code works fine (I'm on git master).

Another approach could be:

In [117]: import pandas

In [118]: import random

In [119]: df = pandas.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))

In [120]: rows = random.sample(df.index, 10)

In [121]: df_10 = df.ix[rows]

In [122]: df_90 = df.drop(rows)

Newer versions (from 0.16.1 on) support this directly via `DataFrame.sample`: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sample.html
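On those newer versions, the same 10/90 split can be sketched with `sample` and `drop` (same example frame as above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))

df_10 = df.sample(frac=0.1)   # 10% of the rows, without replacement by default
df_90 = df.drop(df_10.index)  # everything that was not sampled
```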

Wouter Overmeire
  • 65,766
  • 10
  • 63
  • 43
  • 7
    Another approach is to use `np.random.permutation` – Wes McKinney Sep 08 '12 at 22:37
  • 1
    @WesMcKinney: I notice that `np.random.permutation` would strip the column names from the DataFrame, because `np.random.permutation` returns a plain NumPy array. Is there a method in pandas that would shuffle the dataframe while retaining the column names? – hlin117 Mar 05 '15 at 20:03
  • 4
    @hlin df.loc[np.random.permutation(df.index)] will shuffle the dataframe and keep column names. – Wouter Overmeire Mar 06 '15 at 07:22
  • 1
    @Wouter Overmeire, I just tried this, and it looks like it might work fine for now, but it also gave me a deprecation warning. – szeitlin Apr 08 '15 at 17:01
  • `random.sample()` will cause `RuntimeError: maximum recursion depth exceeded while calling a Python object` if the sample length is too long. recommending `np.random.choice()` – redreamality Dec 15 '15 at 03:21
  • Using df.sample option, you could use a fraction of the sample instead of the raw value of number of rows. For e.g.: df.sample(frac=0.25) – Allohvk Dec 18 '20 at 06:00
79

I have found that np.random.choice() new in NumPy 1.7.0 works quite well for this.

For example, you can pass the index values from a DataFrame and the integer 10 to select 10 uniformly sampled rows.

rows = np.random.choice(df.index.values, 10)
sampled_df = df.ix[rows]
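Since `ix` has been removed from pandas, a sketch of the same idea with `loc`, plus `replace=False` so no row is picked twice (the example `df` is a stand-in):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))

# Sample 10 distinct index labels, then select those rows by label.
rows = np.random.choice(df.index.values, 10, replace=False)
sampled_df = df.loc[rows]
```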
dragoljub
  • 881
  • 7
  • 5
  • with ipython timeit it takes half of `random.sample` time.. awesome – gc5 Nov 11 '13 at 16:34
  • +1 for use of np.random.choice. Also, if you have a `pd.Series` of probabilities, `prob`, you can pick from the index as so: `np.random.choice(prob.index.values, p=prob.values)` – LondonRob Jan 22 '14 at 19:06
  • 39
    Don't forget to specify replace=False if you want sampling without replacement. Otherwise this method can potentially sample the same row multiple times. – Alexander Measure Jan 30 '14 at 03:55
  • if you'd like to sample N unique values of a column 'A' from df w/o replacement, I found the following useful: rand_Nvals = np.random.choice(list(set(df.A)), N, replace=False) – Quetzalcoatl Aug 25 '15 at 04:49
  • In my case, I wanted to *repeat* data -- i.e. take the list ['a','b','c'] and make this list 3,000 long (instead of 3 long). `random.sample` doesn't allow the result to be bigger than the input (`ValueError: Sample larger than population`) `np.random.choice` does allow the result to be bigger than the input. I might be describing a different problem than OP (who specifically says "sample" = smaller than population), but... – Nate Anderson Oct 28 '15 at 16:54
  • Update: pandas uses `iloc` in place of `ix` now. So you might get a Deprecation Error if you try the old command. – istewart Sep 14 '17 at 03:35
25

New in version 0.16.1:

sample_dataframe = your_dataframe.sample(n=how_many_rows_you_want)

doc here: http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.sample.html
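The complement (the rows that were not sampled) can be recovered by dropping the sampled index. A sketch with hypothetical names:

```python
import numpy as np
import pandas as pd

your_dataframe = pd.DataFrame(np.random.randn(50, 3), columns=list('XYZ'))

sample_dataframe = your_dataframe.sample(n=5)
rest_dataframe = your_dataframe.drop(sample_dataframe.index)  # the other 45 rows
```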

newmathwhodis
  • 3,209
  • 2
  • 24
  • 26
  • Once you've got your sample_dataframe, how do you subtract it from your_dataframe? – Chris Nielsen Jun 07 '17 at 19:30
  • @ChrisNielsen Are you asking so you can do cross validation? If so, I recommend http://scikit-learn.org/stable/modules/cross_validation.html as it gives you all your training and testing datasets (X_train, X_test, y_train, y_test) directly – newmathwhodis Jun 08 '17 at 19:13
15

Pandas 0.16.1 has a sample method for that.

hurrial
  • 484
  • 4
  • 9
6

If you're using pandas.read_csv you can directly sample when loading the data, by using the skiprows parameter. Here is a short article I've written on this - https://nikolaygrozev.wordpress.com/2015/06/16/fast-and-simple-sampling-in-pandas-when-loading-data-from-files/
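One way this can look in practice (a sketch, not the article's exact code): precompute which row numbers to skip so that only ~10% of the lines are ever parsed. An in-memory CSV stands in for the real file here:

```python
import io
import random

import pandas as pd

# Small in-memory CSV: a header line plus 100 data rows.
csv_text = "a,b\n" + "\n".join(f"{i},{i * 2}" for i in range(100))

# Keep roughly 10% of the data rows; skip the rest.
# Line 0 is the header, so data rows occupy lines 1..100.
keep = set(random.sample(range(1, 101), 10))
skip = [i for i in range(1, 101) if i not in keep]

sampled = pd.read_csv(io.StringIO(csv_text), skiprows=skip)
```

Because the skipped lines are never parsed, this can be much cheaper than loading the full file and sampling afterwards.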

Nikolay
  • 1,002
  • 11
  • 10