
My real-time web app makes Ajax requests to obtain JSON-encoded data responses.

Returned data is usually in the form of an array of objects.

As the array often has a lot of elements (and although the data sent is gzipped by the server), I keep the keys in the response very short in order to keep the response size to a minimum.

For example, instead of using `description:` I use `d:`, instead of using `width:` I use `w:`, and so on...
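A single element of the response then looks something like this (the values here are made up for illustration):

```javascript
// Short-key element as actually sent over the wire (d = description, w = width):
var item = { d: "red armchair", w: 80 };

// The equivalent "pretty" element would be noticeably larger:
var prettyItem = { description: "red armchair", width: 80 };

console.log(JSON.stringify(item).length < JSON.stringify(prettyItem).length); // true
```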

Doing so reduces the size of the response but, on the client side, the very short, non-human-readable keys make the JavaScript code (that accesses the object) less readable.

The only solution seems to be to reparse the response and rebuild the object with pretty keys, or to replace the keys in the original object received. But this may hurt the JavaScript code's performance, resulting in more delay...

Is there a better solution?


EDIT:

As Björn Roberg suggested in his comment I've made a comparison:

pretty-response.json       459,809 bytes
 short-response.json       245,881 bytes

pretty-response.json.zip    28,635 bytes
 short-response.json.zip    26,388 bytes

So, as the response is compressed by the server, the difference is really minimal.

Still, the pretty response requires the server to compress 450 KB of data, while the short response requires just 240 KB.

Does this impact server performance (or is there a way to measure it)?

Paolo
  • Have you tried using "pretty" keys and compared the actual size of the transfer? – Björn Roberg Nov 16 '13 at 09:25
  • When you say you have a "lot" of data - how much are you talking? You can use [`redis`](http://redis.io/) to "outsource" the storing of JSON objects, which would help if you had a significant number of elements to process – Richard Peck Nov 16 '13 at 09:26
  • I can't help but feel that, whatever your problem is, there must be a better solution than 450kb of JSON. – lonesomeday Nov 16 '13 at 09:55
  • @lonesomeday although it's unusual, that's exactly the minimum data the web app needs to perform its task upon certain user actions. – Paolo Nov 16 '13 at 10:03
  • @Paolo Sounds like a web app with a bad architecture. – lonesomeday Nov 16 '13 at 10:20
  • @lonesomeday that sounds presumptuous (as well as useless), since you don't know at all what the app does or how it does it. – Paolo Nov 16 '13 at 11:15
  • @Paolo "Hit yourself on the head with a hammer" is bad advice. I can say this confidently, without knowing whether you intend to put up shelves or make a chocolate souffle. Seriously, any application that demands you transfer half a megabyte of JSON for client-side parsing is begging for a rethink. – lonesomeday Nov 16 '13 at 11:18

5 Answers


Since you are considering converting the short keys back to long keys on the client side, you are clearly concerned with the bandwidth requirements for the data and not the memory requirements on the client.

I've generated some files containing random data and three keys (`description`, `something` and `somethingElse`). I've also dumped the data through sed to replace those keys with `d`, `s` and `e`.

This results in:

750K   long-keys
457K   short-keys

HTTP has support for compression, and all significant clients support this with gzip. So, what happens if we gzip the files:

187K   10:26 long-keys.gz
179K   10:27 short-keys.gz

There is very little to choose between them, since gzip is rather good at compressing repeated strings.

So, just use HTTP compression and don't worry about munging the data.

gzip is also a really fast algorithm, so the impact it will have on server performance is negligible.

Quentin

Maybe you could try protocol buffers and see if that makes any difference. They were designed to be faster and lighter than many other serialization formats (e.g. XML and JSON).

Other formats exist that share the same goals, but protocol buffers, aka protobufs, are the one that sprang to my mind.

Refer to this answer for a nice comparison.

Björn Roberg

You can use the decorator pattern to wrap the objects upon retrieval from the array.

However, given that you probably want to access all the objects returned (why would you return objects that aren't needed by the client?) it would probably be no slower, and possibly faster, to just convert the objects to objects with longer field names upon retrieval from the array.

If you are going to retrieve each object multiple times, you could even go through the array and replace them one by one, to avoid having to repeatedly convert them.

All these options have a performance cost, but it may not be significant. Profile!
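As a sketch, the straight conversion could look like this (the key map and sample data are assumptions based on the question, not part of the asker's code):

```javascript
// Map from short wire keys to readable names; extend as needed.
var KEY_MAP = { d: 'description', w: 'width' };

// Rebuild one object with pretty keys; unknown keys pass through unchanged.
function prettify(obj) {
  var out = {};
  for (var key in obj) {
    if (Object.prototype.hasOwnProperty.call(obj, key)) {
      out[KEY_MAP[key] || key] = obj[key];
    }
  }
  return out;
}

// Replace each element in place, so the conversion happens only once.
var response = [{ d: 'red armchair', w: 80 }];
for (var i = 0; i < response.length; i++) response[i] = prettify(response[i]);

console.log(response[0].description); // 'red armchair'
```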

Robin Green
  • Converting the objects to objects with longer field names will require processing, and thus time, on the client side. I'm ranging from 1,000 to 10,000 elements. As for profiling the actual cost, it's hard because it depends on the hardware/OS/browser combination... Too many cases to test! – Paolo Nov 16 '13 at 09:54
  • Do you really need to handle so many objects on one page? Can't you use lazy loading and/or do more processing on the server? – Robin Green Nov 16 '13 at 09:55
  • Yes, I do. I know it's quite peculiar to have a response so big, but that's exactly the minimum data the web app needs to perform its task upon certain user actions. – Paolo Nov 16 '13 at 10:02

Compress your JSON on the server with http://dean.edwards.name/packer/

Library for compression: http://dean.edwards.name/download/#packer

You can also check your JSON size online to see whether it is reduced or not.

rajesh kakawat

If you want your code to be readable and still use short keys, you could store the keys in well-named variables and use bracket notation to access members:

var descriptionKey = 'd';
var widthKey = 'w';

//...

var description = yourObject[descriptionKey];
Jacob