In the context of hardware cache memory, where these concepts usually come up, the analysis is not usually done per individual memory address. Locality is analyzed in terms of accesses to memory blocks, the units that are transferred between the cache and main memory.
If you think of it that way, your code has both temporal and spatial locality. When your code reads some_array[0], if its address is not found in the cache, it is read from main memory and the whole block that contains it is copied into the cache, replacing some other block according to a replacement policy (LRU, for example).
Then, when you access some_array[1] a short time later, its block is already in the cache, so the read is faster. Notice that you accessed the same block (spatial locality) and did so within a short time (temporal locality), so you benefit from both.
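To see this in practice, here is a minimal sketch in C (the matrix m and the size N are made-up for illustration). Summing row by row visits consecutive addresses, so every word of each block fetched on a miss gets used; summing column by column jumps N * sizeof(int) bytes between accesses, so most of each fetched block is wasted and the same blocks must be refetched later:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096  /* made-up size: a 64 MiB matrix of ints */

/* Row-major walk: consecutive addresses, so each cache block fetched
 * on a miss is fully consumed before moving on (good spatial locality). */
static long sum_row_major(int (*m)[N])
{
    long sum = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += m[i][j];
    return sum;
}

/* Column-major walk: each access jumps N * sizeof(int) bytes, so only
 * one int of every fetched block is used before the next miss. */
static long sum_col_major(int (*m)[N])
{
    long sum = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += m[i][j];
    return sum;
}

int main(void)
{
    int (*m)[N] = malloc(sizeof(int[N][N]));
    if (!m)
        return 1;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            m[i][j] = i ^ j;

    clock_t t0 = clock();
    long r = sum_row_major(m);
    clock_t t1 = clock();
    long c = sum_col_major(m);
    clock_t t2 = clock();

    printf("row-major: %ld in %.3fs\ncol-major: %ld in %.3fs\n",
           r, (double)(t1 - t0) / CLOCKS_PER_SEC,
           c, (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(m);
    return 0;
}
```

On typical hardware the column-major walk is noticeably slower, even though both loops do exactly the same arithmetic.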
Cache memory takes advantage of spatial and temporal locality to provide faster memory access. Whether your code is written in a way that lets it take advantage of this is a different issue. In practice, the compiler will do most of these optimizations for you, so you should only concern yourself with this after a profiling session has revealed an actual bottleneck. On Linux, Cachegrind (a Valgrind tool) is great for this.
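As a rough idea of the workflow (./your_program stands in for your own binary), Cachegrind is run through Valgrind and writes its counts to a cachegrind.out.<pid> file, which cg_annotate then breaks down per function:

```sh
valgrind --tool=cachegrind ./your_program
cg_annotate cachegrind.out.<pid>
```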