
I have a large dataset: a 20,000 x 40,000 numpy array, which I have saved as a pickle file.

Instead of reading this huge dataset into memory, I'd like to only read a few (say 100) rows of it at a time, for use as a minibatch.

How can I read only a few randomly-chosen (without replacement) lines from a pickle file?

StatsSorceress
  • Store it in some other format that allows random or incremental access. – martineau Jun 21 '16 at 21:10
  • What do you recommend? Can I convert it from pickle to another format without having to open it? – StatsSorceress Jun 21 '16 at 21:12
  • You will have to load it and dump it again in another format. – Padraic Cunningham Jun 21 '16 at 21:13
  • This doesn't answer your question, but be aware of the security risk. Warning from the docs: _The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source._ – MartinP Jun 21 '16 at 21:13
  • If it's an array of numbers, you could store it as a binary file and use `file.seek()` to access any row of them in the file. The `struct` module can be used to both write and read the file. – martineau Jun 21 '16 at 21:14
  • Thank you Padraic, but is there a format that wouldn't be slow to load? I've tried using a pandas dataframe and csv, but it took an age. – StatsSorceress Jun 21 '16 at 21:15
  • Maybe a memmap is closer to what you want: http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.memmap.html (see the sketch after these comments). – Padraic Cunningham Jun 21 '16 at 21:18
  • `np.save` and `np.load` can work with `memmap`. `pickle` may actually be using `save`'s methods for arrays, but I believe the `memmap` option is only available through `np.load`. – hpaulj Jun 21 '16 at 21:48
  • `h5py` also allows chunked storage and access. – hpaulj Jun 21 '16 at 22:00
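
Following the memmap suggestion, a minimal sketch of what that could look like: the pickled array is loaded and re-saved once in `.npy` format (as Padraic Cunningham notes, one load-and-dump pass is unavoidable), after which minibatches can be drawn without ever holding the whole matrix in RAM again. The file names here are illustrative.

import pickle
import numpy as np

# One-time conversion: load the pickled array once and re-save it in .npy
# format so that it can be memory-mapped afterwards (file names illustrative).
with open('mydata.pkl', 'rb') as f:
    a = pickle.load(f)
np.save('mydata.npy', a)
del a  # free the memory; from now on only the memmap is needed

# Memory-map the .npy file: rows are read from disk only when indexed.
data = np.load('mydata.npy', mmap_mode='r')

# Draw a minibatch of 100 distinct rows (sampling without replacement).
idx = np.random.choice(data.shape[0], size=100, replace=False)
minibatch = np.asarray(data[idx])  # fancy indexing copies only these rows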

3 Answers


You can write pickles incrementally to a file, which allows you to load them incrementally as well.

Take the following example. Here, we iterate over the items of a list and pickle each one in turn. (The example uses Python 2's `cPickle`; in Python 3 the same pattern works with the standard `pickle` module's `Pickler` and `Unpickler`.)

>>> import cPickle
>>> myData = [1, 2, 3]
>>> f = open('mydata.pkl', 'wb')
>>> pickler = cPickle.Pickler(f)
>>> for e in myData:
...     pickler.dump(e)
<cPickle.Pickler object at 0x7f3849818f68>
<cPickle.Pickler object at 0x7f3849818f68>
<cPickle.Pickler object at 0x7f3849818f68>
>>> f.close()

Now we can do the same process in reverse and load each object as needed. For the sake of example, let's say that we just want the first item and don't want to iterate over the entire file.

>>> f = open('mydata.pkl', 'rb')
>>> unpickler = cPickle.Unpickler(f)
>>> unpickler.load()
1

At this point, the file stream has only advanced as far as the first object. The remaining objects weren't loaded, which is exactly the behavior you want. For proof, you can read the rest of the file and see that the remaining data is still sitting there.

>>> f.read()
'I2\n.I3\n.'
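
Building on this, one way to get random access to individual rows is to record each row's byte offset with `f.tell()` while dumping, and later `seek()` straight to a randomly chosen offset. The following is only a sketch (Python 3 `pickle`; the helper names and the `rows.pkl` file name are assumptions, not anything from the example above):

import pickle
import random

import numpy as np

def dump_rows_with_index(a, path):
    # Pickle each row separately and remember where each record starts.
    offsets = []
    with open(path, 'wb') as f:
        for row in a:
            offsets.append(f.tell())  # byte offset of this row's pickle
            pickle.dump(row, f)
    return offsets

def load_random_rows(path, offsets, k):
    # Read k randomly chosen rows (without replacement) by seeking to them.
    with open(path, 'rb') as f:
        rows = []
        for off in random.sample(offsets, k):
            f.seek(off)
            rows.append(pickle.load(f))
    return np.stack(rows)

a = np.arange(20.0).reshape(5, 4)
offsets = dump_rows_with_index(a, 'rows.pkl')
minibatch = load_random_rows('rows.pkl', offsets, 2)  # 2 distinct rows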
Alex Smith

Since you cannot rely on the internal layout of a pickle file for random access, you need to use another storage method. The script below uses the `tobytes()` function to save the data line-wise in a raw file.

Since the length of each line is known, its offset in the file can be computed, and the line can be accessed via `seek()` and `read()`. After that, it is converted back to an array with the `frombuffer()` function.

The big disclaimer, however, is that the size of the array is not saved (this could be added as well, but requires some more complications) and that this method might not be as portable as a pickled array.

As @PadraicCunningham pointed out in his comment, a memmap is likely to be an elegant alternative solution.

Remark on performance: After reading the comments I did a short benchmark. On my machine (16GB RAM, encrypted SSD) I was able to do 40000 random line reads in 24 seconds (with a 20000x40000 matrix of course, not the 10x10 from the example).

from __future__ import print_function
import numpy
import random

def dumparray(a, path):
    """Write the array row by row as raw bytes (no header or dtype info)."""
    lines, _ = a.shape
    with open(path, 'wb') as fd:
        for i in range(lines):
            fd.write(a[i,...].tobytes())

class RandomLineAccess(object):
    """Read individual rows of the raw line-wise dump via seek()."""
    def __init__(self, path, cols, dtype):
        self.dtype = dtype
        self.fd = open(path, 'rb')
        self.line_length = cols*dtype.itemsize  # bytes per row

    def read_line(self, line):
        # compute the row's byte offset, jump there and read exactly one row
        offset = line*self.line_length
        self.fd.seek(offset)
        data = self.fd.read(self.line_length)

        return numpy.frombuffer(data, self.dtype)

    def close(self):
        self.fd.close()


def main():
    lines = 10
    cols = 10
    path = '/tmp/array'

    a = numpy.zeros((lines, cols))
    dtype = a.dtype

    for i in range(lines):
        # add some data to distinguish lines
        a[i, ...].fill(i)

    dumparray(a, path)
    rla = RandomLineAccess(path, cols, dtype)

    line_indices = list(range(lines))
    for _ in range(20):
        line_index = random.choice(line_indices)
        print(line_index, rla.read_line(line_index))

    rla.close()

if __name__ == '__main__':
    main()
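
Applied to the question's 20,000 x 40,000 matrix, the class above can be combined with `random.sample` to draw a 100-row minibatch without replacement. A short usage sketch (assuming the array was already written with `dumparray()` and has the default float64 dtype):

import random
import numpy

# Assumes dumparray() already wrote the 20000x40000 float64 array to /tmp/array.
rla = RandomLineAccess('/tmp/array', cols=40000, dtype=numpy.dtype('float64'))

batch_indices = random.sample(range(20000), 100)  # 100 distinct row indices
minibatch = numpy.stack([rla.read_line(i) for i in batch_indices])  # (100, 40000)

rla.close()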
code_onkel

Thanks everyone. I ended up finding a workaround (a machine with more RAM so I could actually load the dataset into memory).

StatsSorceress