
I need help/advice on improving my current design, please :)

This relates to collision detection in a simple game: dynamic bodies (moving ones) might collide with static bodies (i.e. ground, walls). I'm porting my Obj-C model to JavaScript and am facing memory/performance questions about my way of implementing it.

I'm using a very basic approach: an array of arrays represents my level in terms of physical opacity.

  • bit set to 0: Transparent area, bodies can go through

  • bit set to 1: Opaque area, bodies collide

Testing the transparency/opacity of a pixel simply goes as follows:

if (grid[x][y]) {
 // collide!
}
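A minimal sketch of this approach, with a hypothetical level size and platform (the dimensions and coordinates are just illustrative):

```javascript
// Build a WIDTH x HEIGHT grid initialised to 0 (transparent),
// mark a horizontal platform as opaque, then test pixels.
var WIDTH = 1000, HEIGHT = 600;

var grid = [];
for (var x = 0; x < WIDTH; x++) {
  grid[x] = [];
  for (var y = 0; y < HEIGHT; y++) {
    grid[x][y] = 0; // transparent
  }
}

// hypothetical platform: opaque pixels from x=100..199 at y=550
for (var px = 100; px < 200; px++) {
  grid[px][550] = 1;
}

function collides(x, y) {
  return !!grid[x][y];
}

console.log(collides(150, 550)); // true  (on the platform)
console.log(collides(150, 549)); // false (just above it)
```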

My knowledge of JS is pretty limited in terms of performance/memory, so I can't evaluate how good this approach is :) That said, I have no idea how efficient arrays are here.

Just imagine a 1000-pixel-wide level that's 600px high. It's a small level, but this already means an array containing 1000 arrays, each containing up to 600 entries. Besides, I've not found a way to ensure I create a 1-bit-sized element like low-level languages have.

Using the following, can I be sure an entry isn't stored as something "bigger" than a bit?

grid[x][y] = true;
grid[x][y] = false;
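For what it's worth, JS booleans and numbers are not stored as single bits. One compact alternative is a flat typed array, which guarantees one byte per cell (a sketch, assuming an environment with typed-array support; the names are mine):

```javascript
// A flat Uint8Array uses exactly one byte per cell.
var width = 1000, height = 600;
var cells = new Uint8Array(width * height); // all cells start at 0

function setOpaque(x, y) {
  cells[x + y * width] = 1;
}

function isOpaque(x, y) {
  return cells[x + y * width] === 1;
}

setOpaque(10, 20);
console.log(isOpaque(10, 20)); // true
console.log(isOpaque(10, 21)); // false
```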

Thanks for your time and comments/advices!

J.

Jem
  • Btw, has anyone a reference of memory usage per value type? I found this one here, but couldn't confirm it with other sources: http://stackoverflow.com/questions/1248302/javascript-object-size – Jem Jan 04 '12 at 14:12

2 Answers


If you have a 1000x600 grid, you are guaranteed to have at least 601 arrays in memory (1001 if you do it the other way round).

Rather than doing this, I would consider using either one array, or (preferably) one object with a mapping scheme.

var map = {};
map["1x1"] = 1;
map["1x3"] = 1;
// cells with an entry are opaque; missing keys are empty and free to move through

function canGoIn(x, y) {
    return !map.hasOwnProperty(x + "x" + y);
}

Alternatively:

var map = [];
var height = 600; // entries per column, i.e. the level height
map.push(0);
map.push(1);
// etc

function canGoIn(x, y) {
    return map[(x * height) + y] == 0; // 0 = transparent
}
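The flat-array layout can be populated with a double loop over the level. A self-contained sketch, where `levelIsSolid` is a hypothetical per-pixel predicate standing in for however the level data is actually produced:

```javascript
// Fill a flat map in column-major order: index = x * height + y.
var width = 1000, height = 600;
var map = [];

function levelIsSolid(x, y) { // hypothetical: solid ground at the bottom
  return y >= 550;
}

for (var x = 0; x < width; x++) {
  for (var y = 0; y < height; y++) {
    map.push(levelIsSolid(x, y) ? 1 : 0);
  }
}

function canGoIn(x, y) {
  return map[(x * height) + y] == 0; // 0 = transparent, free to move
}

console.log(canGoIn(5, 100)); // true  (open air)
console.log(canGoIn(5, 555)); // false (inside the ground)
```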
Matt
  • Hey, that's a very interesting idea, I like it a lot. In your first proposal, you are using a string "1x3" as the key. I imagine "10000x10000" being the longest value, that's 11 characters. Isn't that still huge in terms of memory? – Jem Jan 04 '12 at 14:05
  • @jeM680000: I wouldn't say "huge", but it will have *some* usage obviously. I've just done some tests, and it seems that the single array works out as the most efficient (~11000k), multiple arrays next (~14000k), and the map uses ~25000k. (Assumed input of 1000x1000.) (See http://jsperf.com/quickest-way-to-represent-a-map for the tests I used; I checked the memory usage of a tab in Chrome 16 on XP SP3.) – Matt Jan 04 '12 at 14:12
  • Hey thanks that's awesome. Thanks for your time, made me learn about jsperf as well. Much appreciated! – Jem Jan 04 '12 at 14:23

A boolean value won't be stored as just one bit, and that is also true of every other language I know (C included).

If you are having memory issues, you should consider implementing a bitarray like this one: https://github.com/bramstein/bit-array/blob/master/lib/bit-array.js

You will have to turn your 2D array into a simple vector and convert your x, y coordinates like this: offset = x + (y * width);

Indexing a 2D array still involves a multiplication internally to evaluate the offset, so using a vector is equivalent to nested arrays in that respect.

But I suspect that calling a function (in case you're using a bit-array) and doing some bit manipulation inside will lead to poorer performance.

I don't think you can gain performance and save memory at the same time.
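To make the trade-off concrete, here is a minimal bit-packing sketch in the spirit of the linked bit-array (not that library's actual API; names and sizes are illustrative). It packs 32 cells per element at the cost of a shift and a mask on every access:

```javascript
// Pack 32 cells per Uint32Array element: one bit per cell.
var width = 1000, height = 600;
var bits = new Uint32Array(Math.ceil(width * height / 32));

function setBit(x, y) {
  var offset = x + (y * width);
  bits[offset >> 5] |= (1 << (offset & 31)); // offset/32, offset%32
}

function getBit(x, y) {
  var offset = x + (y * width);
  return (bits[offset >> 5] & (1 << (offset & 31))) !== 0;
}

setBit(3, 4);
console.log(getBit(3, 4)); // true
console.log(getBit(3, 5)); // false
```

This stores the whole 1000x600 grid in 75 KB, versus 600 KB for one byte per cell, but every read pays for the extra arithmetic.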

adreide