Memory is memory
What I mean by that: there are no physically different memories for different segments (stack, heap, etc.). Moreover, memory is Random Access Memory. One property of RAM is that accessing data takes the same amount of time regardless of where the data physically sits on the chip or of what was accessed before it (contrast this with tape memory, or even hard disks). So access to RAM is indiscriminately the same, regardless of whether we're talking about heap, stack, or anything else.
Cache to the rescue
That being said, that's not the whole story. Modern architectures have caches. A full discussion of caches is too broad for here, but the gist is that caches are smaller, more expensive, but faster memories that "cache" data from RAM. So in real scenarios, data that was accessed before (temporal locality) or that sits near previously accessed data (spatial locality) will most likely be fed faster to the CPU because it is available in cache.
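To make locality concrete, here's a minimal sketch (the buffer size, stride, and exact numbers are my own picks and will vary by machine and compiler): it sums the same heap buffer once sequentially and once in strided passes that use only one `int` out of each 64-byte cache line per fetch. The arithmetic is identical; only the locality differs.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

// Sum every element of the buffer, either sequentially (stride 1) or in
// strided passes. With stride 16 and 4-byte ints, each pass touches one
// int per 64-byte cache line, so each line gets refetched on every pass.
static long long touch(const std::vector<int>& data, std::size_t stride) {
    auto start = std::chrono::steady_clock::now();
    long long sum = 0;
    for (std::size_t offset = 0; offset < stride; ++offset)
        for (std::size_t i = offset; i < data.size(); i += stride)
            sum += data[i];
    auto end = std::chrono::steady_clock::now();
    std::printf("stride %2zu: %lld us\n", stride,
                (long long)std::chrono::duration_cast<
                    std::chrono::microseconds>(end - start).count());
    return sum; // returned so the compiler can't optimize the loops away
}

int main() {
    std::vector<int> data(64 * 1024 * 1024, 1); // 256 MiB, far bigger than any cache
    long long sink = touch(data, 1);  // spatial locality: whole cache lines used
    sink += touch(data, 16);          // 64-byte jumps: one int per fetched line
    return sink == 0;
}
```

Compile with optimizations on (e.g. -O2); on a typical desktop the strided walk comes out several times slower even though it performs exactly the same number of additions.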
Ok, that's nice, but which segment is faster?
As a rule of thumb, we say stack memory is faster than heap memory. That personally confused me at first, when I was thinking as in paragraph 1. But once you take paragraph 2 into account, it makes sense. Due to its usage pattern, the stack is almost always in cache: function calls keep reusing the same small, hot region of addresses, so the top of the stack rarely leaves it.
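Here's a hedged micro-benchmark sketch of that rule of thumb (the buffer size and iteration count are arbitrary choices of mine, and an optimizing compiler is allowed to elide the heap allocation entirely, so treat the output as illustrative):

```cpp
#include <chrono>
#include <cstdio>

static volatile int sink; // volatile so the stores below aren't optimized away

static void use_stack() {
    int buf[256];                 // lives in the current stack frame: hot, cached
    for (int i = 0; i < 256; ++i) buf[i] = i;
    sink = buf[128];
}

static void use_heap() {
    int* buf = new int[256];      // allocator call on every invocation
    for (int i = 0; i < 256; ++i) buf[i] = i;
    sink = buf[128];
    delete[] buf;
}

template <typename F>
static void bench(const char* name, F f) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 1'000'000; ++i) f();
    auto end = std::chrono::steady_clock::now();
    std::printf("%s: %lld ms\n", name,
                (long long)std::chrono::duration_cast<
                    std::chrono::milliseconds>(end - start).count());
}

int main() {
    bench("stack", use_stack);
    bench("heap ", use_heap);
}
```

Note that the gap here comes from two things at once: the allocator call itself, and the fact that the stack version keeps hitting the same already-cached addresses.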
So... use stack?
Unfortunately it isn't as simple as that. It never is, especially when you analyze low-level performance. The stack can't be very large. And sometimes, even if you could keep your data on the stack, there are other reasons why it is preferable to put it on the heap (the data must outlive the current function, its size is only known at runtime, and so on). So, I am sorry (not really) to tell you that the answer is never simple or black and white. All you can practically do is profile your application and see for yourself. That's relatively easy. Interpreting the results and knowing how to improve them is a whole other beast.
If, for instance, the traversal time in heap is 2x for 100,000 elements and 1x for stack, would this proportion be the same for a different size (10,000,000)?
Even for, let's say, heap-only data, performance isn't linear. Why? Well, caches again. When you access memory that fits in cache, performance plays nice; then you see a spike just as your data grows beyond the size of a cache level. On relatively older systems you had three nicely delimited regions corresponding to the three cache levels in a computer: you'd see a spike as your data went from fitting in one level to only fitting in the next, and once it didn't fit in cache at all, it went downhill. Modern processors have "smart cache", which with some black magic makes it appear more as if you had one big cache instead of three levels.
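Here's a sketch of how you could observe those regions yourself: traverse heap arrays of growing size and print nanoseconds per element. The step positions depend on your CPU's cache sizes, and sequential traversal understates the cliffs because the hardware prefetcher hides much of the latency; the sizes and pass counts below are my own choices.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    // Working sets from 4 KiB up to 64 MiB, doubling each step.
    for (std::size_t kib = 4; kib <= 64 * 1024; kib *= 2) {
        std::vector<int> data(kib * 1024 / sizeof(int), 1);
        // Touch roughly the same total amount of memory (~256 MiB) at
        // every size, so each measurement takes comparable wall time.
        const int passes = static_cast<int>(256 * 1024 / kib);
        long long sum = 0;
        auto start = std::chrono::steady_clock::now();
        for (int p = 0; p < passes; ++p)
            sum += std::accumulate(data.begin(), data.end(), 0LL);
        auto end = std::chrono::steady_clock::now();
        double ns = std::chrono::duration<double, std::nano>(end - start).count();
        std::printf("%6zu KiB: %.3f ns/element (sum=%lld)\n",
                    kib, ns / (double(passes) * data.size()), sum);
    }
}
```

If you plot ns/element against the working-set size, the flat stretches correspond to cache levels and the jumps mark where the data spills into the next, slower one.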