Suppose I have a two-dimensional array `grid` declared as `double grid[5][5]`. It is my understanding that the following statements are true:

- when `grid` is declared, a contiguous block of memory is allocated for 5*5 doubles, no more, no less;
- when an element of the array is accessed with the notation `grid[i][j]`, this piece of code is actually interpreted as `*(grid + (i*5 + j))`.
On the other hand, I know I can also store the same matrix as an array of pointers, by writing something like:
double **grid;
int i;

grid = (double**)malloc(sizeof(double*) * 5);
for (i = 0; i < 5; i++)
    grid[i] = (double*)malloc(sizeof(double) * 5);
and I actually have code which does that. The problem is that it then proceeds to access the elements of `grid` just like before, with the double-subscript notation. Is this case different? In this case, is `grid[i][j]` converted to `*(*(grid+i)+j)`, i.e. to a double dereference? That's the only way I can see it working correctly.
(This question probably stems from my (lack of) understanding of the relationship between pointer and array types in C...)
EDIT:
Ok, let's see if I got this straight:
- `grid[i][j]` is always converted to `*(*(grid+i)+j)`;
- this expression is indeed calculated differently in the two cases, because, as Jim states in his answer, pointer arithmetic takes into account the size of the type pointed to; nevertheless, the correct element is fetched in both cases;
- if and only if `grid` is a 2D array, this expression is further reduced to `*((double*)grid + (i*5 + j))`, which is possible because the compiler knows that any `grid[i]` is actually an array of 5 doubles starting at offset `i*5` doubles from the beginning of `grid`.
But this leaves me with an inescapable conclusion: for a 2D array, if I set `i = j = 0`, then I have that `**grid == *((double*)grid)`. Is this correct?