
I have a 3000x300 matrix file of floats. When I read it in and convert the values, I get float64, which is Python's default. I tried NumPy and map() to convert the values to float32, but both seem very inefficient.

my code:

x = open(readFrom, 'r').readlines()
y = [[float(i) for i in s.split()] for s in x]

time taken: 0:00:00.996000

numpy implementation:

x = open(readFrom, 'r').readlines()
y = [[np.float32(i) for i in s.split()] for s in x]

time taken: 0:00:06.093000

map()

x = open(readFrom, 'r').readlines()
y = [map(np.float32, s.split()) for s in x]

time taken: 0:00:05.474000

How can I convert to float32 very efficiently?

Thank you.

Update:

numpy.loadtxt() and numpy.genfromtxt() do not work for huge files (they give a memory error). I have posted a question related to that, and the approach I presented here works well for a huge matrix file (50,000x5,000). Here is the question
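For reference, a minimal sketch of the loadtxt attempt described above, using a tiny in-memory stand-in for the real matrix file (the sample data here is illustrative):

```python
import io
import numpy as np

# tiny stand-in for the real whitespace-separated matrix file
data = io.StringIO(u"1.0 2.0 3.0\n4.0 5.0 6.0\n")

# dtype=np.float32 gives a float32 result directly, but loadtxt still
# builds Python-level intermediates while parsing, which is where the
# memory error comes from on very large files
y = np.loadtxt(data, dtype=np.float32)
print(y.dtype)   # float32
print(y.shape)   # (2, 3)
```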

Maggie

1 Answer


If memory is a problem, and you know the size of the array ahead of time, you probably don't want to read the entire file into memory in the first place. Something like this is probably more appropriate:

# allocate the output array up front (np.empty would work too and be
# marginally faster, but probably not worth mentioning)
a = np.zeros((3000, 300), dtype=np.float32)
with open(filename) as f:
    for i, line in enumerate(f):
        a[i, :] = map(np.float32, line.split())

From a couple of quick (and surprising) tests on my machine, it appears that the map may not even be necessary:

a = np.zeros((3000, 300), dtype=np.float32)
with open(filename) as f:
    for i, line in enumerate(f):
        a[i, :] = line.split()

This might not be the fastest, but it will certainly be the most memory-efficient way to do it.
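As a rough illustration of why the preallocated array wins on memory (the exact byte counts are CPython- and platform-specific, and the list estimate below is only approximate):

```python
import sys
import numpy as np

rows, cols = 3000, 300

# the preallocated float32 array: exactly 4 bytes per element
a = np.zeros((rows, cols), dtype=np.float32)
print(a.nbytes)  # 3600000 bytes, ~3.4 MB

# a list of lists needs an 8-byte pointer per element (64-bit CPython)
# plus a ~24-byte Python float object per value -- roughly an order of
# magnitude more in total; the numbers are platform-dependent
row = [0.0] * cols
bytes_per_row = sys.getsizeof(row) + cols * sys.getsizeof(0.0)
print(rows * bytes_per_row)
```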

Some tests:

import numpy as np

def func1():   # no map -- and pretty speedy :-)
    a = np.zeros((3000, 300), dtype=np.float32)
    with open('junk.txt') as f:
        for i, line in enumerate(f):
            a[i, :] = line.split()

def func2():
    a = np.zeros((3000, 300), dtype=np.float32)
    with open('junk.txt') as f:
        for i, line in enumerate(f):
            a[i, :] = map(np.float32, line.split())

def func3():
    a = np.zeros((3000, 300), dtype=np.float32)
    with open('junk.txt') as f:
        for i, line in enumerate(f):
            a[i, :] = map(float, line.split())

import timeit

print timeit.timeit('func1()', setup='from __main__ import func1', number=3)  # 1.36s
print timeit.timeit('func2()', setup='from __main__ import func2', number=3)  # 11.53s
print timeit.timeit('func3()', setup='from __main__ import func3', number=3)  # 1.72s
mgilson
  • Thanks @mgilson; if I don't find any more efficient approach, I'll settle for this answer. – Maggie Jul 17 '12 at 18:11
  • Time taken for the 3000x300 matrix file is `0:00:05.885000`. – Maggie Jul 17 '12 at 18:16
  • @Mahin -- what happens if you use `map(float,line.split())` instead of `map(np.float32,line.split())`? You may actually see a performance boost (from "vectorizing" the cast from `double` to `float`)... – mgilson Jul 17 '12 at 18:59
  • @Mahin -- Please see my updated answer. Apparently using `np.float32` is really slow (although I don't know why). – mgilson Jul 17 '12 at 19:23
  • I have one problem with this answer: the output is a `numpy.ndarray`, and when I convert it to a list using `tolist()`, the float32 values are converted back to float64. I don't know why. How can I solve this? Thanks a lot again :) – Maggie Jul 17 '12 at 19:33
  • @Mahin -- Playing around with it a little, it seems that this is always the case: `type(np.arange(10,dtype=np.float32).tolist()[0])`. From the documentation on `tolist`: "Data items are converted to the nearest compatible Python type". – mgilson Jul 17 '12 at 19:47
  • @Mahin -- But lists are your memory problem in the first place. A list takes up significantly more memory than a comparably sized numpy array (a factor of ~2) -- so why go back to using lists? – mgilson Jul 17 '12 at 19:49
  • That's great... I changed to a numpy array and everything works perfectly. Btw, `np.float32` is much slower when used within a loop: it's slower than `np.float64` and much, much slower than `float`. Used once, `float32` is faster than the other two. Here is a question with great explanations: http://stackoverflow.com/questions/5956783/numpy-float-10x-slower-than-builtin-in-arithmetic-operations – Maggie Jul 17 '12 at 20:05
  • @Mahin -- Good find on the link. I was thinking about posting a similar question after these results -- now I don't have to. Good luck! – mgilson Jul 17 '12 at 20:10
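The "vectorizing the cast" idea from the comments can be sketched as follows: parse each line with the fast builtin `float` into a float64 buffer, then do the double-to-float conversion once as a single vectorized `astype` call. The tiny in-memory file and sizes here are illustrative:

```python
import io
import numpy as np

# tiny stand-in for the real matrix file
f = io.StringIO(u"1.0 2.0\n3.0 4.0\n")
rows, cols = 2, 2

# parse with the builtin float (fast per-element) into a float64 buffer...
buf = np.zeros((rows, cols), dtype=np.float64)
for i, line in enumerate(f):
    buf[i, :] = [float(x) for x in line.split()]

# ...then cast the whole array to float32 in one C-level operation
a = buf.astype(np.float32)
print(a.dtype)  # float32
```

Note the tradeoff: this holds a temporary float64 array, so peak memory is roughly triple the float32 result, whereas filling a preallocated float32 array directly (as in the answer above) never materializes the float64 copy.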