
I have a large table file (around 2 GB) that holds a distance matrix indexed by its first column. Its rows look something like:

A 0 1.2 1.3 ...
B 1.2 0 3.5 ...
C 1.5 0 4.5 ...

However, I only need to keep a small subset of the rows. If I'm given a list of the indices that I need to keep, what is the best and fastest way to read this file into a pandas dataframe? Right now, I am using

distance_matrix = pd.read_table("hla_distmat.txt", header = None, index_col = 0)[columns_to_keep]

to read in the file, but this runs into memory issues in the read_table call itself. Is there a faster and more memory-efficient way to do this? Thanks.

Alex

1 Answer


You need the usecols parameter to filter columns and skiprows to filter rows. Note that skiprows expects the positions of the rows to remove, given as a list, range, or np.array:

distance_matrix = pd.read_table("hla_distmat.txt", 
                                 header=None, 
                                 index_col=0, 
                                 usecols=columns_to_keep,   # already a list, no extra brackets
                                 skiprows=range(10, 100))

Sample (with real data you can omit the sep parameter; sep='\t' is the default in read_table):

import pandas as pd
import numpy as np 
from io import StringIO

temp=u"""0;119.02;0.0
1;121.20;0.0
3;112.49;0.0
4;113.94;0.0
5;114.67;0.0
6;111.77;0.0
7;117.57;0.0
6648;0.00;420.0
6649;0.00;420.0
6650;0.00;420.0"""
# after testing, replace StringIO(temp) with the real filename

columns_to_keep = [0,1]

df = pd.read_table(StringIO(temp), 
                   sep=";", 
                   header=None,
                   index_col=0, 
                   usecols=columns_to_keep,
                   skiprows=range(5, 100))
print (df)
        1
0        
0  119.02
1  121.20
3  112.49
4  113.94
5  114.67

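If you do not want to enumerate the rows to skip up front, read_table (like read_csv) also accepts a callable for skiprows: it is called with each row position and should return True for rows to drop. A minimal sketch reproducing the sample above, assuming the same temp data:

```python
import pandas as pd
from io import StringIO

temp = u"""0;119.02;0.0
1;121.20;0.0
3;112.49;0.0
4;113.94;0.0
5;114.67;0.0
6;111.77;0.0
7;117.57;0.0
6648;0.00;420.0
6649;0.00;420.0
6650;0.00;420.0"""

# keep only the first five rows; the lambda is called once per row position
df = pd.read_table(StringIO(temp),
                   sep=";",
                   header=None,
                   index_col=0,
                   usecols=[0, 1],
                   skiprows=lambda i: i >= 5)
print(df)
```

The callable avoids materialising a large range or array when the file has many rows.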
More general solution with numpy.setdiff1d:

# with index_col=0, the first column (0) must always be kept
columns_to_keep = [0,1]
# keep the second, third and fifth rows (positions 1, 2, 4)
rows_to_keep = [1,2,4]
# estimated row count, or use a solution from http://stackoverflow.com/q/19001402/2901002
max_rows = 100

df = pd.read_table(StringIO(temp), 
                   sep=";", 
                   header=None,
                   index_col=0, 
                   usecols=columns_to_keep,
                   skiprows = np.setdiff1d(np.arange(max_rows), np.array(rows_to_keep)))
print (df)
        1
0        
1  121.20
3  112.49
5  114.67
jezrael