22

I would like to sort my data by a given column, specifically p-values. However, the issue is that I am not able to load my entire dataset into memory. Thus, the following doesn't work, or rather works only for small datasets.

data = data.sort(columns=["P_VALUE"], ascending=True, axis=0)

Is there a quick way to sort my data by a given column that only takes chunks into account and doesn't require loading the entire dataset into memory?

user1867185
  • 407
  • 1
  • 4
  • 8
  • where is your data stored? how big? what's memory constraint? – Jeff Jan 22 '14 at 01:09
  • It's a couple-of-TB file and the maximum available memory is about 250 GB on the cluster. – user1867185 Jan 22 '14 at 02:51
  • and how do you store it; assume hdf? – Jeff Jan 22 '14 at 03:30
  • 1
    see http://pandas.pydata.org/pandas-docs/dev/io.html, and http://pandas.pydata.org/pandas-docs/dev/cookbook.html#hdfstore; hdf5 is an extremely efficient format for fast and space-efficient storage and retrieval. sorting is non-trivial in a chunked manner but certainly possible – Jeff Jan 22 '14 at 04:30
  • Some co-workers and I came across this same problem. What we ended up doing was a parallel split of the file into smaller 1-million-row chunks. Then, depending on how you are sorting, you can come up with some sort of directory scheme to "sort" the files into. If it's transaction data, you could use AWK or pandas to parse each 1-million-row chunk into a relative year_quarter directory/file, and then you can sort on these aggregated files. If you need the data in one file, then at the end you can just stack them back together in order. Good luck! – Ryan G Jan 22 '14 at 20:06
  • This seems to be related with [“Large data” work flows using pandas](http://stackoverflow.com/questions/14262433/large-data-work-flows-using-pandas). @JoãoAbrantes It seems you are looking for a code solution. Have you tried the solution posted in that thread? – iled Feb 01 '16 at 17:33
  • @iled that code is not specific for the sorting problem – João Abrantes Feb 01 '16 at 17:37

5 Answers

18

In the past, I've used Linux's venerable sort and split utilities to sort massive files that choked pandas.

I don't want to disparage the other answer on this page. However, since your data is in text format (as you indicated in the comments), I think it's a tremendous complication to start converting it into other formats (HDF, SQL, etc.) for something that GNU/Linux utilities have been solving very efficiently for the last 30-40 years.


Say your file is called stuff.csv, and looks like this:

4.9,3.0,1.4,0.6
4.8,2.8,1.3,1.2

Then the following command will sort it numerically, in descending order, by the 3rd column:

sort --parallel=8 -t , -nrk3 stuff.csv

Note that the number of threads here is set to 8.


The above will work with files that fit into main memory. When your file is too large, you would first split it into a number of parts. So

split -l 100000 stuff.csv stuff

would split the file into pieces of at most 100000 lines each (named stuffaa, stuffab, and so on).

Now you would sort each piece individually as above, writing each result to a file matching sorted_stuff_*. Finally, you would merge the sorted pieces, again through (wait for it...) sort:

sort -m sorted_stuff_* > final_sorted_stuff.csv

Finally, if your file is not a plain CSV (say it is a tgz archive), then you should find a way to pipe a CSV version of it into split.
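
If you would rather drive those three steps from Python, here is a minimal sketch built on the subprocess module. The file names, the 100000-line chunk size, and the sort key are placeholders taken from the examples above, and it assumes GNU split and sort are available on the PATH:

import glob
import subprocess

# 1. Split the big file into pieces of at most 100000 lines (stuff_part_aa, ...).
subprocess.run(["split", "-l", "100000", "stuff.csv", "stuff_part_"], check=True)

# 2. Sort each piece independently with the same key as above
#    (numeric, descending, 3rd comma-separated field).
for part in glob.glob("stuff_part_*"):
    with open("sorted_" + part, "w") as out:
        subprocess.run(["sort", "-t", ",", "-nrk3", part], stdout=out, check=True)

# 3. Merge the pre-sorted pieces; -m needs the same key flags to preserve the order.
with open("final_sorted_stuff.csv", "w") as out:
    subprocess.run(["sort", "-t", ",", "-nrk3", "-m"] + glob.glob("sorted_stuff_part_*"),
                   stdout=out, check=True)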

Ami Tavory
  • 74,578
  • 11
  • 141
  • 185
  • isn't it necessary to also specify the order of sorting for the mergesort at the end, i.e. `sort -nrk3 -m sorted_stuff_* > final_sorted_stuff.csv`? Without this, won't sort default to, I believe, just sorting based on the first column and then proceeding rightwards? – Michael Ohlrogge May 26 '16 at 19:25
8

As I mentioned in the comments, this answer already provides a possible solution. It is based on the HDF format.

As for the sorting problem itself, there are at least three possible ways to solve it with that approach.

First, you can try to use pandas directly, querying the HDF-stored DataFrame.
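
A minimal sketch of that first option, assuming the frame was written in table format with P_VALUE declared as a data column (the file name and store key are hypothetical). This does not sort the whole table, but for p-values you typically only need the smallest ones:

import pandas as pd

# Assumes the data was stored with something like
#   df.to_hdf('data.h5', 'df', format='table', data_columns=['P_VALUE'])
with pd.HDFStore('data.h5') as store:
    # The filter is evaluated on disk, so only the interesting rows are loaded.
    subset = store.select('df', where='P_VALUE < 0.05')

# The much smaller subset can now be sorted comfortably in memory.
subset = subset.sort_values('P_VALUE')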

Second, you can use PyTables, which pandas uses under the hood.

Francesc Alted gives a hint in the PyTables mailing list:

The simplest way is by setting the sortby parameter to true in the Table.copy() method. This triggers an on-disk sorting operation, so you don't have to be afraid of your available memory. You will need the Pro version for getting this capability.

In the docs, it says:

sortby : If specified, and sortby corresponds to a column with an index, then the copy will be sorted by this index. If you want to ensure a fully sorted order, the index must be a CSI one. A reverse sorted copy can be achieved by specifying a negative value for the step keyword. If sortby is omitted or None, the original table order is used
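
A minimal PyTables sketch of that idea; the file name is hypothetical, and the node path assumes a table written by pandas' to_hdf in table format:

import tables

with tables.open_file('data.h5', mode='a') as h5:
    table = h5.root.df.table               # pandas keeps the table node under /<key>/table
    table.cols.P_VALUE.create_csindex()    # full (CSI) index, assuming the column is not indexed yet
    # Write a new, fully sorted copy of the table to disk.
    table.copy(newname='table_sorted_by_p', sortby='P_VALUE', checkCSI=True)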

Third, still with PyTables, you can use the method Table.itersorted().

From the docs:

Table.itersorted(sortby, checkCSI=False, start=None, stop=None, step=None)

Iterate table data following the order of the index of sortby column. The sortby column must have associated a full index.
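
A sketch of that third option, with the same assumptions about file and node names; it streams rows in P_VALUE order instead of writing a sorted copy:

import tables

with tables.open_file('data.h5', mode='a') as h5:
    table = h5.root.df.table                # assumed node name, as above
    table.cols.P_VALUE.create_csindex()     # itersorted needs a full index (skip if one already exists)
    for row in table.itersorted('P_VALUE', checkCSI=True):
        # Rows arrive in ascending P_VALUE order; process them one at a time.
        print(row['P_VALUE'])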


Another approach consists of using a database in between. The detailed workflow can be seen in this IPython Notebook published at plot.ly.

This makes it possible to solve the sorting problem, along with other data analyses that pandas supports. It looks like it was created by the user chris, so all the credit goes to him. I am copying the relevant parts here.

Introduction

This notebook explores a 3.9 GB CSV file.

This notebook is a primer on out-of-memory data analysis with

  • pandas: A library with easy-to-use data structures and data analysis tools. Also, interfaces to out-of-memory databases like SQLite.
  • IPython notebook: An interface for writing and sharing python code, text, and plots.
  • SQLite: A self-contained, serverless database that's easy to set up and query from pandas.
  • Plotly: A platform for publishing beautiful, interactive graphs from Python to the web.

Requirements

import pandas as pd
from sqlalchemy import create_engine # database connection 

Import the CSV data into SQLite

  1. Load the CSV, chunk-by-chunk, into a DataFrame
  2. Process the data a bit, strip out uninteresting columns
  3. Append it to the SQLite database

disk_engine = create_engine('sqlite:///311_8M.db') # Initializes database with filename 311_8M.db in current directory

chunksize = 20000
index_start = 1

for df in pd.read_csv('311_100M.csv', chunksize=chunksize, iterator=True, encoding='utf-8'):

    # do stuff   

    df.index += index_start

    df.to_sql('data', disk_engine, if_exists='append')
    index_start = df.index[-1] + 1

Query value counts and order the results

Housing and Development Dept receives the most complaints

df = pd.read_sql_query('SELECT Agency, COUNT(*) as `num_complaints` '
                       'FROM data '
                       'GROUP BY Agency '
                       'ORDER BY -num_complaints', disk_engine)

Limiting the number of sorted entries

Which 10 cities file the most complaints?

df = pd.read_sql_query('SELECT City, COUNT(*) as `num_complaints` '
                       'FROM data '
                       'GROUP BY `City` '
                       'ORDER BY -num_complaints '
                       'LIMIT 10 ', disk_engine)
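
The same pattern applies directly to the question: let SQLite do the out-of-core sorting and pull back only as many rows as you need. A sketch, assuming you have loaded your own table (with a P_VALUE column) into the database in the same chunked way:

# disk_engine is the SQLAlchemy engine created above.
df = pd.read_sql_query('SELECT * FROM data '
                       'ORDER BY P_VALUE ASC '
                       'LIMIT 100000', disk_engine)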

Possibly related and useful links

iled
  • 2,142
  • 3
  • 31
  • 43
3

Blaze might be the tool for you, with its ability to work with pandas and csv files out of core. http://blaze.readthedocs.org/en/latest/ooc.html

import blaze
import pandas as pd
d = blaze.Data('my-large-file.csv')
d.P_VALUE.sort()  # Uses Chunked Pandas

For faster processing, load it into a database first, which Blaze can also control. But if this is a one-off and you have some time, then the posted code should do it.

Back2Basics
  • 7,406
  • 2
  • 32
  • 45
1

If your csv file contains only structured data, I would suggest an approach using only Linux commands.

Assume the csv file contains two columns, COL_1 and P_VALUE:

map.py:

#!/usr/bin/env python
import sys

# Emit p_value first so that a plain `sort` on the output orders rows by p-value.
for line in sys.stdin:
    col_1, p_value = line.strip().split(',')
    print "%f,%s" % (float(p_value), col_1)

then the following Linux pipeline will generate the csv file sorted by p_value:

cat input.csv | ./map.py | sort > output.csv

If you're familiar with Hadoop, the same map.py plus a simple reduce.py will generate the sorted csv file via the Hadoop streaming system; a minimal reducer is sketched below.
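
The reducer can be a plain identity pass-through, because the Hadoop streaming framework already sorts the mapper output by key before it reaches the reducer. A minimal sketch:

reduce.py:

#!/usr/bin/env python
# Identity reducer: lines arrive already sorted by the p_value key emitted
# by map.py, so just echo them to the output.
import sys
for line in sys.stdin:
    sys.stdout.write(line)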

ZFY
  • 135
  • 12
0

Here is my honest suggestion: three options you can try.

  1. I like pandas for its rich documentation and features, but it has been suggested to me to use NumPy, as it can feel comparatively faster for larger datasets. You can also think of using other tools for an easier job.

  2. In case you are using Python 3, you can break your big data into chunks and do concurrent threading. I am too lazy for this and it does not look cool; you see, pandas, NumPy, and SciPy are built with hardware design perspectives to enable multithreading, I believe.

  3. I prefer this one; it is the easy and lazy technique, according to me. Check the documentation at http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort.html

You can also use the 'kind' parameter in the pandas sort function you are using.
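
For example, as a sketch only: sort_values is the non-deprecated replacement for the linked DataFrame.sort, and this still assumes the frame fits in memory, which is exactly the constraint in the question. The file name is hypothetical:

import pandas as pd

# `kind` selects the underlying sorting algorithm, e.g. a stable mergesort.
data = pd.read_csv('my_data.csv')
data = data.sort_values('P_VALUE', ascending=True, kind='mergesort')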

Godspeed my friend.

sam
  • 1,819
  • 1
  • 18
  • 30
  • 2
    I would like to ask you to show some references or examples on how "it feels faster". `pandas` is built on top of `numpy`. It is like a `numpy` data-analysis-flavoured version. Just do `df.values` to get a `numpy.array`. Also, `DataFrame.sort_values()` (the one linked is deprecated) uses `numpy.sort()`. See the code [here](https://github.com/pydata/pandas/blob/master/pandas/core/categorical.py). Of course, it may add some overhead, and it might be slightly faster to use `numpy`(cpu time, perhaps not in terms of programming time), in which cases you can easily access `numpy` objects. – iled Feb 04 '16 at 17:41
  • While NumPy is a great tool for working with problems that fit in RAM, NumPy can't handle problems of the size he's talking about on the author's current machine. (This memory constraint wasn't in the original question; it showed up in the comments later.) – Back2Basics Feb 07 '16 at 03:06
  • Thank you for the clarity, iled & Back2Basics. Thanks guys. – sam Feb 08 '16 at 11:59