
If I have a Python script that creates a lookup table to be read by a webpage (JavaScript and maybe AJAX), what is the most efficient format to use, in terms of speed and, if possible, size?

The lookup table could have 2000 rows.

Here is a data example:

Apple: 3fd4
Orange: 1230
Banana: 942a
...
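
In Python terms, the table is essentially a dict mapping names to short hex strings (a minimal sketch using the example values above):

```python
# Hypothetical lookup table produced by the Python script:
# name -> 4-character hex string, ~2000 entries in total.
lookup = {
    "Apple": "3fd4",
    "Orange": "1230",
    "Banana": "942a",
    # ...
}
```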
qwr
  • Isn't this primarily opinion-based? – JDurstberger Dec 14 '15 at 14:47
  • Are you asking us to tell you what _the best_ format is? That's an opinion, and not very constructive. – Nelewout Dec 14 '15 at 14:48
  • @Altoyr the question has been edited – qwr Dec 14 '15 at 14:50
  • If you have 20,000 rows you probably should be using a database, not a text file. – PM 2Ring Dec 14 '15 at 14:55
  • @PM2Ring Sorry for my typo, I meant 2000 rows. Though it could be 20000 in a different application. – qwr Dec 14 '15 at 14:57
  • question is not any better, it is still too broad AND opinion based and no code, etc –  Dec 14 '15 at 14:58
  • @JarrodRoberson In what way is it opinion based? If I asked "what is the most efficient way to load data from a file with javascript", is that opinion based? – qwr Dec 14 '15 at 15:01
  • efficient in time or space or both? either way you can figure this out yourself with your data set, otherwise it is too broad and just opinion based guessing by anyone else! –  Dec 14 '15 at 15:02
  • @JarrodRoberson This is not any more opinion based than "How often should I commit git"...I am not trying to instigate some XML vs JSON war – qwr Dec 14 '15 at 15:30
  • As written, the only answers you can get are likely to be opinion-based conjecture because it is too broad. If you can provide one or more usecases, that might help narrow the scope to be answerable. (Example: my data is static and publicly sharable, my users are all connecting via high latency links with mobile devices that have modern HTML 5 browsers and I can't use 3rd party libraries... in which case, the best answer is probably to shove it down via JSON, have them load it into localstorage for caching, and go to town); change any constraint, answer may change – Foon Dec 14 '15 at 17:06
  • *"How often should I commit git".* is about is opinion based as it gets ... if you are trying to prove my point you did a great job! –  Dec 14 '15 at 23:33
  • @JarrodRoberson My point is that it remains [a very popular and unclosed question](http://stackoverflow.com/questions/107264/how-often-to-commit-changes-to-source-control) on SO...It is quite old though. – qwr Dec 15 '15 at 04:03
  • @qwr - this has been explained to death on meta, just because something is popular does not mean it is not off-topic, things that are wrong do not prove that other wrong things are correct. –  Dec 15 '15 at 15:31
  • @JarrodRoberson The main point is that it's unclosed. Its popularity only means many people have looked at it. – qwr Dec 15 '15 at 15:34
  • @qwr - you miss the main point: its existence or state does not validate this question; the two are not related. Lots of crap from 7 years ago fell through the cracks or was tolerated; that has no bearing on this question; but it will not be open for much longer either way. –  Dec 15 '15 at 15:47

1 Answer


Even though this is primarily opinion-based, I'd like to give you a rough overview of your options.

If size is truly critical, consider a binary format. You could even write your own!
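
As an illustration, a hand-rolled layout using Python's struct module could store each entry as a length-prefixed key followed by the value packed into two bytes (a sketch, assuming the values fit into 16 bits as the example data suggests; `write_binary` and `lookup.bin` are made-up names):

```python
import struct

def write_binary(table, path):
    # Hypothetical format: 1-byte key length, the UTF-8 key,
    # then the hex value packed as a big-endian 16-bit integer
    # (e.g. "3fd4" -> 0x3fd4). Keys must be at most 255 bytes.
    with open(path, "wb") as f:
        for key, value in table.items():
            encoded = key.encode("utf-8")
            f.write(struct.pack("B", len(encoded)))
            f.write(encoded)
            f.write(struct.pack(">H", int(value, 16)))

write_binary({"Apple": "3fd4", "Orange": "1230"}, "lookup.bin")
```

That gets each row down to len(key) + 3 bytes, but the webpage then needs matching decode logic on its end, which is rarely worth the trouble at this scale.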

With the data size you are presenting, we are probably talking anywhere from kilobytes to a few megabytes of data (depending on the field values and number of columns), so the format matters. A simple CSV or plain-text file - provided it can be read by the webpage - is very efficient in terms of additional overhead: separating the values with commas and putting the table headers on line 1 is about as concise as text gets.
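
On the Python side, such a dump is a few lines with the standard csv module (the file name and header labels are illustrative):

```python
import csv

table = {"Apple": "3fd4", "Orange": "1230", "Banana": "942a"}

with open("lookup.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "value"])  # table headers on line 1
    writer.writerows(table.items())     # one "name,value" row per entry
```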

JSON would work too, but it carries somewhat more overhead than a raw (text) data dump like CSV, since every key and value gets quoted. JavaScript Object Notation is widely used for data transfer, but coercing plain tabular data into it does not buy you much over the raw dump.
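
For comparison, the JSON dump is equally short with the standard json module; compact separators keep the quote-and-brace overhead to a minimum:

```python
import json

table = {"Apple": "3fd4", "Orange": "1230", "Banana": "942a"}

with open("lookup.json", "w") as f:
    # separators=(",", ":") strips the default whitespace
    json.dump(table, f, separators=(",", ":"))
```

The main convenience JSON buys you is on the JavaScript side, where the file loads directly via JSON.parse or an AJAX call with no hand-written parsing.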

Final thoughts: put it into a relational database and do not worry about it any more. That is the tried-and-tested approach for any relational data set, and I do not really see a reason to deviate from it.
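
With Python's built-in sqlite3 module that takes only a few lines; declaring the name column as the primary key gives you an indexed lookup for free (a sketch with illustrative table and file names):

```python
import sqlite3

table = {"Apple": "3fd4", "Orange": "1230", "Banana": "942a"}

conn = sqlite3.connect("lookup.db")
conn.execute("CREATE TABLE IF NOT EXISTS lookup (name TEXT PRIMARY KEY, value TEXT)")
conn.executemany("INSERT OR REPLACE INTO lookup VALUES (?, ?)", table.items())
conn.commit()

# Indexed lookup by key:
row = conn.execute("SELECT value FROM lookup WHERE name = ?", ("Apple",)).fetchone()
print(row[0])  # -> 3fd4
conn.close()
```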

Nelewout
  • It is more speed-critical - at approximately what size would a CSV file become too slow to be appropriate? – qwr Dec 14 '15 at 15:11
  • @qwr, do not use file storage. Use a database and index it properly if speed is truly your concern. – Nelewout Dec 14 '15 at 15:19
  • apparently others have different **opinions**. –  Dec 14 '15 at 23:36