
At the moment we have an AJAX-driven site that returns a JSON response in the following format: [{"n":"bob"}, {"n":"jim"}, ..., {"n":"alex"}]. Basically, an array of JSON objects.

However, we are thinking about structuring it like so: [["bob"], ["jim"], ..., ["alex"]]. Basically, an array of arrays.

The obvious difference is that instead of accessing the data via named properties, we'd be accessing it via assumed index positions (i.e., array[0] == name). Beyond that, I'm curious what the pros/cons of each are. Interesting factors: performance on the client (both constructing the data structure and retrieving its values), bandwidth (which matters for larger data sets or slower connections, say mobile devices), design patterns/best practices, how others have designed their responses, etc.
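To make the access difference concrete, here is a minimal sketch (using only the "n" field and sample values from the example above):

    // Current format: array of objects - the property name describes the data
    var objects = JSON.parse('[{"n":"bob"},{"n":"jim"},{"n":"alex"}]');
    console.log(objects[0].n);    // "bob"

    // Proposed format: array of arrays - the consumer must know index 0 is the name
    var arrays = JSON.parse('[["bob"],["jim"],["alex"]]');
    console.log(arrays[0][0]);    // "bob"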

paul smith
  • The question is not precise enough, and the answer has a taste of 'obvious'. – Vincent Cantin Apr 12 '12 at 05:08
  • What about just `['bob', 'jim', 'alex']`? – neoascetic Apr 12 '12 at 05:08
  • *"assumed index positions"* - You have answered your own question for why this is usually a bad idea. – deceze Apr 12 '12 at 05:16
  • @neoascetic This is just a very basic example; assume there'd be more than just one variable in the list. – paul smith Apr 12 '12 at 05:25
  • @deceze Bad idea in terms of...? Design? Performance? I just stated an obvious consequence, but I'm not sure how it relates to any of the factors I'm interested in :) – paul smith Apr 12 '12 at 05:26
  • The word *"assumed"* is not usually something you'll want to use when talking about your data model. :) First, depending on your backend, array indices may not be guaranteed, or it may be unnecessarily difficult to ensure index positions where named indexes would be trivial. Second, if your data models ever grow, dealing with 20 arbitrarily numbered index positions is a much bigger headache than using named indexes. – deceze Apr 12 '12 at 06:45

2 Answers

  • When using an object-in-array format, you can carry more data per item, not just the name n.

    [{
        "name": "John",
        // more data about John
    }, {
        "name": "Joe",
        // more data about Joe
    }]
    // myArray[0].name === "John"
    
  • When using a plain array, the data returned is assumed to all be of the same kind.

    ['John', 'Joe', ...] // all names
    // myArray[0] === 'John'
    

But it all boils down to how you parse the data.
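As a rough sketch of what that parsing looks like for each shape (the extra age field here is purely illustrative):

    // Object-in-array: field names travel with the data
    var people = JSON.parse('[{"name":"John","age":30},{"name":"Joe","age":25}]');
    people.forEach(function (p) {
        console.log(p.name + ' is ' + p.age);
    });

    // Array-of-arrays: the consumer has to know what each position means
    var rows = JSON.parse('[["John",30],["Joe",25]]');
    rows.forEach(function (r) {
        console.log(r[0] + ' is ' + r[1]);
    });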


Structuring data like this:

[["bob"], ["jim"], ..., ["alex"]]

is totally wrong. Why put each name in its own array inside an array of names? What's the second level for?

Joseph
  • Sorry, I was trying to just give a simple example to avoid confusion. If you prefer a more complicated example, say each item in the list contains more than just a name, maybe a person's entire profile data. – paul smith Apr 12 '12 at 05:29
  • @paulsmith then objects in an array (the first sample in my answer) would be a safer solution for that. – Joseph Apr 12 '12 at 05:33

The array approach prevents you from using meaningful property names, so I would use JSON objects. You'll also find that most programming languages have good libraries for creating JSON output.

Unless you are dealing with very large data sets, I don't think the array approach is going to provide any noticeable performance benefit.
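For a rough sense of the bandwidth side of the question, one way to compare the two formats is to measure their serialized sizes; the 1,000-item loop and the single n field below are just placeholders:

    // Build the same data in both shapes and compare serialized payload sizes (illustrative only)
    var asObjects = [], asArrays = [];
    for (var i = 0; i < 1000; i++) {
        asObjects.push({ n: 'name' + i });
        asArrays.push(['name' + i]);
    }
    console.log(JSON.stringify(asObjects).length); // larger: the "n" key is repeated per item
    console.log(JSON.stringify(asArrays).length);  // smaller: no repeated keys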

Nick Clark
  • Yes, from a readability point of view, the JSON object approach is better. But are you saying that arrays _are_ actually faster than objects? If so, in terms of what? Access? Parsing? Memory? – paul smith Apr 12 '12 at 05:31
  • http://stackoverflow.com/questions/8423493/what-is-the-performance-of-objects-arrays-in-javascript-specifically-for-googl – Nick Clark Apr 12 '12 at 06:29