The only significant difference (ignoring "syntactic sugar") is that realloc() may try to increase the size of the existing allocation in place; a vector can't do this, so growing beyond its capacity always means allocating a new area, copying the old data into the new area, then freeing the old area.
Based on this, there are two cases where realloc() would be superior. The first is when the data is large (so copying all of it is expensive) and extending the existing area in place avoids that copying.
The second case is where you simply don't have enough (virtual) memory for two copies to exist at the same time. For example, if you're on a 32-bit computer with 2 GiB of address space available for data and you've got a 1.5 GiB array, then you can't allocate a second 1.5 GiB array (there isn't enough space) and must either extend the existing area in place or fail.
Note that in both of these scenarios (which are relatively rare), realloc() and vectors alike are likely to be inferior to other approaches. For a simple example, instead of having one huge array you could break it up into a linked list of "sub-arrays" (e.g. a linked list of 1000 sub-arrays of 1.5 MiB each instead of a single 1.5 GiB array), so that it's cheap to create a new sub-array and add it to the end of the list. Of course other approaches have their own disadvantages (e.g. for the "linked list of sub-arrays" example, iteration becomes slightly more complex).