
I'm working with large arrays (shape (900, 1000, 10000)) and I need to do simple element-wise computations on them (multiply, divide, etc.). However, I am getting memory errors. Is there a way to do the following more efficiently, or to declare memory needs in Python? Here is what I'm trying to do:

from __future__ import division
import numpy as np

x = np.random.binomial(1, .1, (900, 1000, 10000))  # shape (900, 1000, 10000)
y = np.random.binomial(2, .1, (1000, 10000))        # shape (1000, 10000)
z = x / y  # or z = np.divide(x, y); this is where it fails

The arrays x and y are created without trouble, but computing z gives me a memory error.

Thanks.
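
For scale, a quick back-of-the-envelope check (using only the shapes from the snippet above) shows why the division runs out of memory; the 8-byte figure assumes the float64 result that true division of integer arrays produces by default:

import numpy as np

shape = (900, 1000, 10000)          # shape of x and of the result z
n = np.prod(shape, dtype=np.int64)  # 9,000,000,000 elements

print(n * 8 / 2.0**30)  # ~67 GiB if z is float64 (the default for x / y)
print(n * 4 / 2.0**30)  # ~33.5 GiB even if z were stored as float32

x itself has the same number of elements, so it already occupies tens of gigabytes as well.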

    900 * 1000 * 10000 floats take about 33 gigabyte of memory. And I'm being generous, as I assumed single precision = 32 bit, not double = 64 bit. –  Dec 10 '11 at 22:58
  • @delnan - Your comment is right and I deleted my answer, but if you read the bug again, you will see it might well have been related; in particular, there is an explicit mention of the MemoryError. Cheers! :) – mac Dec 10 '11 at 23:03
  • @delnan: The given code would use double precision. – Sven Marnach Dec 10 '11 at 23:04
  • I guess it comes down to "How can I do this in a loop?" – Brigand Dec 11 '11 at 02:02
  • Answers to this [previous question](http://stackoverflow.com/questions/1053928/python-numpy-very-large-matrices) might be useful. – Bora Caglayan Dec 11 '11 at 15:37
  • I was previously doing this in a loop, but thought I might be able to speed it up by doing it all at once; I guess it's pretty tough either way (see the chunked-loop sketch below). – mike Dec 14 '11 at 00:05
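
Building on the loop idea in the last two comments, here is a minimal sketch of doing the division in slabs along the first axis, so only a small piece of x (and of the result) is in memory at once. The file name z.dat, the chunk size of 10, and the use of np.memmap to hold a float32 result on disk are illustrative assumptions, not anything given in the question:

from __future__ import division
import numpy as np

x_shape = (900, 1000, 10000)
y = np.random.binomial(2, .1, (1000, 10000))

# Assumed setup: keep the ~34 GiB float32 result on disk via a memory map,
# and generate/divide x ten slabs at a time (each slab is well under 1 GiB).
z = np.memmap('z.dat', dtype=np.float32, mode='w+', shape=x_shape)
chunk = 10
for start in range(0, x_shape[0], chunk):
    stop = min(start + chunk, x_shape[0])
    x_slab = np.random.binomial(1, .1, (stop - start,) + x_shape[1:])
    z[start:stop] = x_slab / y  # broadcasting against y works slab by slab
                                # (zeros in y still give inf, as in the original)

If the full result is never needed all at once, each slab could instead be consumed and discarded inside the loop, which removes the on-disk array entirely.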

0 Answers