The newly allocated memory contains garbage, and reading a pointer from uninitialized memory is a bug. If you had allocated with calloc( DEFAULT_SIZE, sizeof(Node*) ) instead, the contents of the array would be defined: all bits would be set to zero. On many implementations that is a NULL pointer, although the standard does not guarantee it. Technically, there could be a standard-conforming compiler that makes the program crash if you attempt to read a pointer whose bits are all zero. (Only language lawyers need to worry about that, though. In practice, even the fifty-year-old mainframes that people bring up as examples of machines where NULL was not binary 0 had their C compilers updated to recognize binary 0 as a NULL pointer, because anything else broke too much code.)
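For comparison, a calloc() version might look like this (assuming the same DEFAULT_SIZE and Node as in your code):

struct Node** const array = calloc( DEFAULT_SIZE, sizeof(Node*) );
// Every byte is zero. On common platforms that makes each element a null
// pointer, but the standard does not promise that all-bits-zero is NULL.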
The safe, portable way to do what you want is to initialize every pointer in the array to NULL:
struct Node** const array = malloc(sizeof(Node*) * DEFAULT_SIZE);
// Check for out-of-memory error if you really want to.
for ( ptrdiff_t i = 0; i < DEFAULT_SIZE; ++i )
    array[i] = NULL;
After the loop executes, every pointer in the array is equal to NULL, and the ! operator returns 1 for it, until it is set to something else.
The realloc() call is erroneous. If you do want to do it that way, the size argument should be the new number of elements times the element size; as written, the code will happily make the buffer a quarter or an eighth of the desired size, depending on whether pointers are 4 or 8 bytes wide. Even without that memory-corruption bug, you'll find yourself reallocating far too often, and each reallocation might require copying the entire array to a new location in memory.
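A corrected call would look something like this (nodes and new_count are placeholders for whatever names you actually use):

struct Node** const grown = realloc( nodes, new_count * sizeof(Node*) );
if ( grown != NULL )
    nodes = grown;   // on failure, nodes is unchanged and still valid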
The classic solution to that is to create a linked list of array pages, but if you're going to realloc(), it would be better to multiply the array size by a constant each time.
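A minimal sketch of geometric growth, again with made-up names (nodes, used, capacity):

if ( used == capacity )   // array is full; grow it
{
    size_t const new_capacity = capacity * 2;
    struct Node** const grown = realloc( nodes, new_capacity * sizeof(Node*) );
    if ( grown != NULL )
    {
        nodes = grown;
        // NULL out the newly added slots, as before.
        for ( size_t i = capacity; i < new_capacity; ++i )
            nodes[i] = NULL;
        capacity = new_capacity;
    }
    // On failure, nodes and capacity are unchanged; handle it as you see fit.
}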
Similarly, when you create each Node, you'd want to initialize its pointer fields if you care about portability. No compiler this century will generate less efficient code if you do.
If you only allocate nodes in sequential order, an alternative is to create an array of Node rather than of Node*, and maintain a counter of how many nodes are in use. A modern desktop OS will only map in as many pages of physical memory for the array as your process writes to, so simply allocating and not initializing a large dynamic array does not waste real resources in most environments.
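A sketch of that approach (MAX_NODES and the names are placeholders):

#define MAX_NODES 1024            // placeholder capacity

static struct Node* pool = NULL;  // array of Node, not of Node*
static size_t nodes_in_use = 0;

struct Node* next_node( void )
{
    if ( pool == NULL )
        pool = malloc( MAX_NODES * sizeof(struct Node) );
    if ( pool == NULL || nodes_in_use == MAX_NODES )
        return NULL;              // out of memory, or the pool is exhausted
    return &pool[nodes_in_use++]; // caller still initializes the fields
}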
One other mistake that's probably benign: the elements of your array have type struct Node*, but you allocate sizeof(Node**) rather than sizeof(Node*) bytes for each. The compiler does not type-check this, however, and I am unaware of any compiler where the sizes of these two kinds of object pointer could be different.
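A habit that sidesteps the question entirely is to take the element size from the array itself rather than spelling out a type:

struct Node** const array = malloc( sizeof *array * DEFAULT_SIZE );
// sizeof *array is the size of one element, whatever type array points to.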