There are two levels of memory allocation that usually take place. At the operating system level, you map memory pages into your address space. A page is the basic unit of memory management and is usually something like 4K bytes (but it can be much larger, or as small as 512 bytes, depending on the system). It is possible to do that mapping yourself by making the appropriate system calls, but applications generally only do so when they need large blocks of memory.
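As a rough illustration of mapping pages yourself, here is a minimal sketch using the POSIX `mmap` call (with the Linux/BSD `MAP_ANONYMOUS` flag); the 1 MiB size is just an example of the kind of large block for which you would bother doing this:

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 1 << 20;  /* 1 MiB: large allocations are the usual case for mapping directly */

    /* Ask the OS to map anonymous, private pages into our address space. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Use the memory like any other buffer ... */
    ((char *)p)[0] = 42;

    /* ... and unmap it when done. */
    munmap(p, len);
    return 0;
}
```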
Standard libraries generally maintain a pool of pages. When you call malloc, the library looks to see if there is available memory in the pool. If so, it returns a block of memory from pages already mapped by the operating system. If not, the library makes the system call to map more pages to the process and adds them to the managed pool.
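To make the idea concrete, here is a toy, not-how-real-malloc-works sketch of pooling: a bump allocator that grabs one page-sized chunk from the OS and hands out slices of it, only going back to the OS when the chunk is used up. Real allocators add free lists, size bins, thread caches, and so on.

```c
#include <stdio.h>
#include <stddef.h>
#include <sys/mman.h>

#define POOL_SIZE 4096                 /* assume one 4K page per pool chunk */

static char  *pool;                    /* current chunk of mapped pages */
static size_t pool_used = POOL_SIZE;   /* force a mapping on the first call */

static void *toy_alloc(size_t n)
{
    if (n > POOL_SIZE)
        return NULL;                   /* toy limit: nothing bigger than one chunk */

    if (pool_used + n > POOL_SIZE) {
        /* Pool exhausted: map a fresh chunk of pages from the OS. */
        pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (pool == MAP_FAILED)
            return NULL;
        pool_used = 0;
    }

    void *p = pool + pool_used;        /* hand out the next slice of the pool */
    pool_used += n;
    return p;
}

int main(void)
{
    int *a = toy_alloc(100 * sizeof(int)); /* triggers one mmap */
    int *b = toy_alloc(100 * sizeof(int)); /* served from the same chunk, no system call */
    printf("%p %p\n", (void *)a, (void *)b);
    return 0;
}
```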
Mapping and unmapping pages is a rather time-consuming process. By pooling pages, the library can speed things up significantly.
Invariably, the standard library functions allocate a few bytes in front of the memory returned by malloc and the like so that they know how much memory is in the block when it is freed. Many implementations also add memory at the end of the block for error checking.
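You can see a hint of this bookkeeping with glibc's `malloc_usable_size` (a glibc-specific function declared in `<malloc.h>`; other C libraries have different or no equivalents). The block you get back is often bigger than what you asked for, and its recorded size sits just outside the pointer you were given:

```c
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>   /* glibc-specific header for malloc_usable_size */

int main(void)
{
    char *p = malloc(10);
    if (!p)
        return 1;

    /* glibc reports how many bytes the block actually spans; it is often
     * more than requested because of alignment and the allocator's own
     * bookkeeping, which lives just before the returned pointer. */
    printf("requested 10, usable %zu\n", malloc_usable_size(p));

    free(p);
    return 0;
}
```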
When you read outside the block you were given, you could be reading this extra bookkeeping data, or you could be reading other data that the library has mapped into its memory pool.
What you are doing is bad.
If you do not know the number of items in advance, you can use a data structure such as a linked list, where a new entry is created for each new number.
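For example, a minimal sketch that reads integers from standard input until there are no more, building one list node per number, then walks and frees the list (names like `struct node` are just illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

/* One node per number read; the list grows as long as input keeps coming. */
struct node {
    int          value;
    struct node *next;
};

int main(void)
{
    struct node *head = NULL;
    int n;

    /* Read numbers until EOF or bad input; no need to know the count up front. */
    while (scanf("%d", &n) == 1) {
        struct node *new_node = malloc(sizeof *new_node);
        if (!new_node) {
            perror("malloc");
            return 1;
        }
        new_node->value = n;
        new_node->next  = head;   /* push onto the front of the list */
        head = new_node;
    }

    /* Walk the list (numbers come out in reverse input order) and free it. */
    while (head) {
        struct node *tmp = head;
        printf("%d\n", head->value);
        head = head->next;
        free(tmp);
    }
    return 0;
}
```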