
I am building a flexible, lightweight, in-memory database in Python, and discovered a performance problem with the way I was looking up values and using indexes. In an effort to improve this I've tried a few options, trying to balance speed with memory usage. My current implementation uses a dict of dicts to store data by record (an object reference) and field (also an object reference). So for example, if I have three records with three fields, where some of the data is missing (i.e. NULL values):

{<Record1>: {<Field1>: 4, <Field2>: 'value', <Field3>: <Other Record>},
 <Record2>: {<Field1>: 4, <Field2>: 'value'},
 <Record3>: {<Field1>: 5}}
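
A minimal runnable sketch of this layout (the Record and Field classes here are placeholders for illustration, not the actual implementation):

class Record:
    """Placeholder record object; instances serve as outer dictionary keys."""

class Field:
    """Placeholder field object; instances serve as inner dictionary keys."""

record1, record2, record3 = Record(), Record(), Record()
field1, field2, field3 = Field(), Field(), Field()

# Missing (NULL) values are simply absent from the inner dict.
data = {
    record1: {field1: 4, field2: 'value', field3: record2},
    record2: {field1: 4, field2: 'value'},
    record3: {field1: 5},
}

# A field lookup returns None (acting as NULL) when the field is missing.
value = data[record3].get(field2)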

I considered a numpy array, but I would still need two dictionaries to map object instances to array indexes, so I can't see that it would perform any better.

Indexes are implemented using a pair of bisected lists, essentially acting as a map from value to record instance. For example, an index on Field1 above:

[[4, 4, 5], [<Record1>, <Record2>, <Record3>]]

I was previously using a simple dict of bins, but this didn't allow range lookups (e.g. all values > 5); see Python hash table for fuzzy matching.
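
For illustration, a range lookup on this kind of index can use the standard bisect module; this is only a sketch of the approach, not the actual index code:

import bisect

# The index: sorted values alongside the records that hold them.
values = [4, 4, 5, 7, 9]
records = ['<Record1>', '<Record2>', '<Record3>', '<Record4>', '<Record5>']

def records_greater_than(threshold):
    """Return all records whose indexed value is strictly greater than threshold."""
    # bisect_right finds the position just past any entries equal to threshold.
    start = bisect.bisect_right(values, threshold)
    return records[start:]

print(records_greater_than(5))  # ['<Record4>', '<Record5>']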

My question is this. I am concerned that I have several object references, and multiple copies of the same values in the indexes. Do all these duplicate references actually use more memory, or are references cheap in Python? My alternative is to try to associate a numerical key with each object, which might improve things at least up to 256 (the range of CPython's small-integer cache), but I don't know enough about how Python handles references to know if this would really be any better.

Does anyone have any suggestions of a better way to manage this?

Reimplementing the critical parts in C is an option I want to keep as a last resort.

For anyone interested, my code is here.

Edit 1:

The question, simply put, is which of the following is more efficient in terms of memory usage, where a is an object instance and i is an integer:

[a] * 1000

Or

[i] * 1000, {a: i}
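
For what it's worth, a sketch of how one might compare the two with sys.getsizeof (the exact byte counts assume a 64-bit CPython build):

import sys

class A:
    """Stand-in for an arbitrary object instance."""

a = A()
i = 1

refs_to_object = [a] * 1000
refs_to_int = [i] * 1000
mapping = {a: i}

# Both lists hold 1000 pointer-sized references, so their sizes match;
# the second option additionally pays for the extra dict.
print(sys.getsizeof(refs_to_object))  # e.g. 8056 bytes on 64-bit CPython
print(sys.getsizeof(refs_to_int))     # same as above
print(sys.getsizeof(mapping))         # overhead of the mapping dict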

Edit 2:

Because of the large number of comments suggesting I use an existing system, here are my requirements. If anyone can suggest a system which fulfills all of these, that would be great, but so far I have not found anything which does. Otherwise, my original question still relates to memory usage of references in Python:

  • Must be light-weight and in-memory. Definitely not a client/server model.
  • Need to be able to easily alter tables, change fields, change rules, etc, on the fly.
  • Need to easily apply very complex validation rules. SQL doesn't meet this requirement. Although it is sometimes possible to build up very complicated statements, it is far from easy.
  • Need to support joins and associations between tables. Many NoSQL databases don't support joins at all, or at most only simple joins.
  • Need to support a method of loading and storing data to any file format. I am currently implementing this by providing a framework which makes it easy to add new formats as needed.
  • It does not need persistence (beyond storing data as in the previous point), and does not need to handle massive amounts of data, i.e. not more than a couple of million records. Typically, I am dealing with a few thousand.
  • I'm not sure I entirely understand your data structure, but why are you reinventing the wheel? – Katriel Dec 03 '12 at 13:31
  • A reference in Python is fundamentally a pointer to a PyObject, so yes, each reference will use a small bit of memory. If you care about that sort of thing, though, you should indeed be looking at writing the critical parts in C. – Katriel Dec 03 '12 at 13:32
  • you may want to look at the pandas [DataFrame](http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe) object, it's kinda like an excel sheet in memory, the only downside is that it doesn't play well with mixed data types (because it is numpy arrays under the hood), it does however support fuzzy matching! – jcr Dec 03 '12 at 13:40
  • @katrielalex: If it is just a C pointer then that's fine, and answers my question. I was worried it would be something a bit larger, as python C objects tend to be. – aquavitae Dec 03 '12 at 13:41
  • @azorius: Mixed data types are one of my main requirements. The reason I'm developing this is that I haven't found anything flexible enough for the sort of data I'm dealing with, which can be very mixed up. I considered something like Google's BigTable, but that's a bit too far from traditional SQL. One of my main usages is preprocessing messed-up data for input to a SQL database. – aquavitae Dec 03 '12 at 13:44
  • You should use [MongoDB](http://www.mongodb.org/) – YXD Dec 03 '12 at 13:45
  • Hmm, I think you've read more into my comment than I meant. If you have multiple copies of an object then you will need more memory than if you just have one copy and many references to it. Again, have you considered not reinventing the wheel? An in-memory sqlite database, or `pandas.DataFrame`s, or any lightweight DB would save you worrying about this stuff. – Katriel Dec 03 '12 at 13:45
  • Also @azorius is wrong: `DataFrame`s are _very good_ with mixed data types. – Katriel Dec 03 '12 at 13:46
  • @katrielalex: I used sqlite3 for a long time before I came to the conclusion that it was too restrictive. My data is just too unstructured. – aquavitae Dec 03 '12 at 13:50
  • @MrE: I only discovered MongoDB recently and haven't had a chance to look at it yet, but it might just be what I'm looking for! I'll investigate. – aquavitae Dec 03 '12 at 13:51
  • "flexible, lightweight, in-memory database" - There are already loads of these. Stop wasting your time, and use an existing solution. I guarantee something in version 4 will be faster and more featureful than your own homebrew version. – Marcin Dec 03 '12 at 14:26

3 Answers


Each reference is in effect a pointer, and each pointer requires a small amount of memory.

You can use the memory_profiler package to view memory use on a line-by-line basis. In this way you can see what happens when you make a reference.
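
A sketch of what that line-by-line profiling might look like with memory_profiler (the function below is just an illustration, not code from the question):

# pip install memory_profiler
from memory_profiler import profile

@profile
def build_index():
    # Per-line memory increments are printed when the decorated function runs.
    records = [object() for _ in range(100000)]
    index = {record: n for n, record in enumerate(records)}
    return index

if __name__ == '__main__':
    build_index()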

Sheena
  • `sys.getsizeof` isn't the size of a reference, it's the size of the referenced object (for some implementation-specific and rarely useful definition of size). –  Dec 03 '12 at 14:53

Python does not specify a particular implementation for dynamic memory management, but from the semantics of the language one can assume that a reference uses memory similar to a C pointer.
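
A small check of that assumption (the exact numbers assume a 64-bit CPython build):

import ctypes, sys

# On CPython a reference is a C pointer, so it costs one platform pointer.
print(ctypes.sizeof(ctypes.c_void_p))             # 8 on a 64-bit build

# Containers grow by one pointer per reference they hold:
print(sys.getsizeof([None]) - sys.getsizeof([]))  # 8: one extra reference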

Apalala

FWIW, I ran some tests on a 100x100 structure, testing a sparsely populated dictionary structure, a fully populated dictionary structure, a list, and a numpy array. The latter two had a dictionary mapping object references to indexes. I timed getting every item in the structure by index (returning a sentinel for missing data in the sparse dict), and also reported the total size. My results were somewhat surprising:

Structure     Time     Size
============= ======== =====
full dict     0.0236s  6284
list          0.0426s  13028
sparse dict   0.1079s  1676
array         0.2262s  12608

So the fastest and second smallest was the full dict, presumably because there was no need to run a `key in dict` check on it.
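
A rough sketch of the kind of comparison described (not the original test code; timings and sizes will vary by machine and Python version):

import timeit
import numpy as np

SIZE = 100 * 100
keys = [object() for _ in range(SIZE)]           # stand-ins for record/field objects
index_of = {k: n for n, k in enumerate(keys)}    # object -> list/array index

full_dict = {k: n for n, k in enumerate(keys)}
sparse_dict = {k: n for n, k in enumerate(keys) if n % 2 == 0}
as_list = list(range(SIZE))
as_array = np.arange(SIZE)

MISSING = object()  # sentinel for NULLs in the sparse dict

def read_full():   return [full_dict[k] for k in keys]
def read_sparse(): return [sparse_dict.get(k, MISSING) for k in keys]
def read_list():   return [as_list[index_of[k]] for k in keys]
def read_array():  return [as_array[index_of[k]] for k in keys]

for name, fn in [('full dict', read_full), ('sparse dict', read_sparse),
                 ('list', read_list), ('array', read_array)]:
    print(name, timeit.timeit(fn, number=100))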

aquavitae