
I am using the Zeoslib library in Delphi.

I have a large multidimensional static array that I need to transfer to an empty table in a local MySQL database. How can I do this efficiently?

Just iterate through a million insert statements?

Mike Furlender

3 Answers

  1. You can use the MySQL multi-row syntax: INSERT INTO tab VALUES (v11, ..., v1n), ..., (vm1, ..., vmn). In other words, you can collect your array rows into chunks of M rows each. This will seriously improve performance.
  2. You can offload the array into a text file, then use the LOAD DATA INFILE statement to load the text file efficiently.
  3. You can consider using other 3rd-party libraries, like AnyDAC, which implements the Array DML feature, intended exactly for your task.
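Option 1 boils down to building one INSERT per chunk of rows instead of one per row. Here is an illustrative sketch in Python (the string-building logic carries over directly to Delphi); the table name `tab` and chunk size are assumptions, and the values are assumed numeric — string values would need proper escaping or parameter binding:

```python
# Build chunked multi-row INSERT statements from a 2D array.
# Table name "tab" and chunk_size are illustrative assumptions.
# Values are assumed numeric; strings would need escaping/binding.
def build_insert_chunks(rows, table="tab", chunk_size=1000):
    """Return one INSERT statement per chunk of rows."""
    statements = []
    for start in range(0, len(rows), chunk_size):
        chunk = rows[start:start + chunk_size]
        values = ", ".join(
            "(" + ", ".join(str(v) for v in row) + ")" for row in chunk
        )
        statements.append(f"INSERT INTO {table} VALUES {values}")
    return statements

# Example: 5 rows with chunks of 2 -> 3 statements instead of 5 round trips.
data = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
stmts = build_insert_chunks(data, chunk_size=2)
```

For a million rows, a chunk size in the hundreds or low thousands usually gives most of the benefit while keeping each statement under MySQL's max_allowed_packet limit.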
da-soft
  • From the looks of it AnyDAC is similar to ZeosLib... do you know by any chance if ZeosLib has any functions to facilitate this? – Mike Furlender Sep 04 '11 at 09:45
  • (1) and (2) are library-independent, MySQL-specific features. (3) is an AnyDAC-specific feature. ZeosLib does not have such a feature. – da-soft Sep 04 '11 at 09:48

Multi-dimensional arrays don't translate well to MySQL.

If you're dealing with a tiny array you'll probably get things done, but it just doesn't scale. No matter what, it's going to get ugly real soon.

  • For a simple 2-dimensional array, you could consider creating a column for one dimension, and use rows for the other. However, you cannot have more than 4096 columns in a table:

http://dev.mysql.com/doc/refman/4.1/en/column-count-limit.html

There is a hard limit of 4096 columns per table, but the effective maximum may be less for a given table. The exact limit depends on several interacting factors, listed in the following discussion.

Every table has a maximum row size of 65,535 bytes. This maximum applies to all storage engines, but a given engine might have additional constraints that result in a lower effective maximum row size.

  • Or, you can create a row per array item, and store the index of each dimension as a primary key. But when you want to retrieve the values, you need to create a monster query.
  • You either need to do:
    • multiple joins, but you can only do 61 joins in a query:

http://dev.mysql.com/doc/refman/5.0/en/joins-limits.html

The maximum number of tables that can be referenced in a single join is 61. This also applies to the number of tables that can be referenced in the definition of a view.

    • Or create a query, group by the row key, and conditionally pick the right value with the combination of a group function and a condition, something like this: sum(if(x=1,y,0))
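The conditional-aggregation trick in the last bullet pivots (row, column, value) records back into a grid: for each row key, sum(if(x=c, y, 0)) contributes y only for the matching column. A minimal Python sketch of the same pivot logic (names and 0-based indexing are illustrative assumptions):

```python
# Pivot (row, col, value) records back into a 2D array, mirroring the
# SQL trick: SELECT row, sum(if(col=0, value, 0)), sum(if(col=1, value, 0)), ...
# GROUP BY row. Indices are assumed 0-based here.
def pivot_records(records, n_rows, n_cols):
    """records: iterable of (row, col, value) tuples."""
    grid = [[0] * n_cols for _ in range(n_rows)]
    for r, c, v in records:
        # Like the conditional sum: only the matching (row, col)
        # cell receives this value.
        grid[r][c] += v
    return grid

records = [(0, 0, 7), (0, 1, 8), (1, 0, 9), (1, 1, 10)]
grid = pivot_records(records, 2, 2)
```

In SQL you would need one sum(if(...)) expression per column, which is exactly why the answer calls the retrieval query a monster.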
Wouter van Nifterick
  • @Gabor: Exactly. My whole point is that MySQL is not well-equipped to deal with multi-dimensional arrays. Above here I present some solutions on how to get it done, but none of it is efficient or clean. – Wouter van Nifterick Sep 07 '11 at 23:36

I had the same problem, only in PHP with 2D arrays. Save the dimensions of the array (the x, y, z, etc. lengths, meaning the number of values on each level), then join the whole array into one long string, divided with a special, unique character like | or ,. When you fetch the data, you can split the string based on the dimension data.

If you need it, I can show you my PHP code, but I see you prefer Delphi.

EDIT: this is an answer for your question before you edited it. Now it's kind of irrelevant.
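The flatten/split round trip described above can be sketched as follows (a Python illustration of the scheme, not the original PHP; the | separator is an assumption — pick any character that cannot occur in the values themselves):

```python
# Flatten a 2D array to one delimited string plus its dimensions,
# then split it back -- the serialization scheme described above.
# The "|" separator is an assumption; it must not occur in the values.
def flatten(grid, sep="|"):
    rows, cols = len(grid), len(grid[0])
    payload = sep.join(str(v) for row in grid for v in row)
    return rows, cols, payload

def unflatten(rows, cols, payload, sep="|"):
    flat = payload.split(sep)
    return [[flat[r * cols + c] for c in range(cols)] for r in range(rows)]

letters = [["a", "b", "c"], ["d", "e", "f"]]
rows, cols, s = flatten(letters)
restored = unflatten(rows, cols, s)
```

Note that everything comes back as a string, which is fine for the letter-grid use case mentioned in the comments but loses type information for anything else.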

Gabor Magyar
  • The thing is I have to do this in real time, as quickly as possible, with thousands of rows. Parsing a string seems like it would slow it down a lot... (plus, can strings even be that big?) – Mike Furlender Sep 04 '11 at 08:50
  • In response to the edit, that's why I used the qualifier "efficiently" :) – Mike Furlender Sep 04 '11 at 08:52
  • I didn't know that. But nevertheless, I still don't know of any other solution for this. Maybe somebody else will. – Gabor Magyar Sep 05 '11 at 04:10
  • This just doesn't make any sense. If you serialize to a string and dump that into a database, you'll miss out on all the good things that databases have to offer. You can't create an index on all of the dimensions, you can't enforce any structure of the data, you can't enforce certain types, you can't create foreign keys, etc. And unless you'll deal with tiny arrays this is going to be painfully slow and ugly. Seriously. Think of the horror of just having to modify a single value in your blob. – Wouter van Nifterick Sep 07 '11 at 23:44
  • If you store your array as XML, at least you'll have some functions built into MySQL to query values in complex structures. I've used it, but it's slow and doesn't scale. In general I'd say that XML in a MySQL database is better than CSV strings, but it's still a bad idea to do stuff like that. Better rethink your design if you think you need this. – Wouter van Nifterick Sep 07 '11 at 23:49
  • I see your point, but I do not need keys or editing specific values in the arrays, so this method suits me well. all I'm doing is storing a varied size rectangular field of random letters, with words hidden amongst the random letters, and each letter is one value of the 2D array, so I think you agree that I don't need anything elaborate for that. – Gabor Magyar Sep 11 '11 at 15:18