
I would like to know more about the traversal time of variables in different data segments. For example, let's say we want to fill an array with 100 000 ints. What would be the difference in traversal time if the array is on the stack, on the heap, or in the data segment? Would it make any difference if we used a much bigger or much smaller array? In other words: if the traversal time on the heap is 2x and on the stack 1x for 100 000 elements, would this proportion stay the same for a different size (10 000 000)? Also, what would be the difference in the process's load time and overall memory usage? Thanks!

EDIT: How can I determine this in code? What I mean by this: is there any function to calculate execution time, "traversal time" and the other things I am trying to find out?

Why don't you write some code and see for yourself? If you have questions about the code, come back here and ask them. – Richard Critten Apr 24 '17 at 13:11
    Could you be a bit more specific? The size of your elements will matter. The algorithm you use to iterate over these elements will matter. Your processor and cache levels will matter (prefetch), and I would even dare say your OS will matter too. – AlexG Apr 24 '17 at 13:13
  • I would be very happy to; could you give me some hint how I could determine all this stuff? Is there any function counting the time to execute a process and overall memory usage? I am kinda new to this. – Gibo Gibonski Apr 24 '17 at 13:13
    Yup - sounds like time for an experiment and, besides, such data would be environment dependent. – ThingyWotsit Apr 24 '17 at 13:14
'is there any functions counting time': Google 'C time functions': 'About 555,000,000 results'. 'overall memory usage': Google 'C memory usage functions': 'About 55,600,000 results'. – ThingyWotsit Apr 24 '17 at 13:18
  • TBH, if you wanted to compare times, you could just loop the tests enough times to measure the time with a stopwatch or wall clock. – ThingyWotsit Apr 24 '17 at 13:21
lol, I guess you are right, thanks! – Gibo Gibonski Apr 24 '17 at 13:24

2 Answers


To answer your edited question, you can use timers. You start a timer right before executing your code and stop it right after. Then subtract Stop - Start to find out the elapsed time.
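For instance, here is a minimal sketch in C using the standard `clock()` function (assuming C is what you're using; the 100 000-element fill comes from your question, and `clock()` measures CPU time with fairly coarse resolution, so for a loop this short you'd want to repeat the work many times):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    enum { N = 100000 };
    int *data = malloc(N * sizeof *data);    /* heap buffer to fill, as in the question */
    if (!data) return 1;

    clock_t start = clock();                 /* start the timer */

    for (size_t i = 0; i < N; ++i)
        data[i] = (int)i;                    /* the work being timed */

    clock_t stop = clock();                  /* stop the timer */

    /* Stop - Start, converted to seconds */
    printf("elapsed: %f s\n", (double)(stop - start) / CLOCKS_PER_SEC);

    free(data);
    return 0;
}
```

Note that an optimizing compiler may remove a loop whose result is never used, so print or otherwise use the filled data, and average over several runs.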

Already answered here

AlexG

Memory is memory

What I mean by that: there is no physically different memory for the different segments (stack, heap, etc.). Moreover, main memory is Random Access Memory. One property of RAM is that accessing data takes the same amount of time regardless of where the data physically sits on the chip or of the previous accesses (contrast this with tape memory, or even hard disks). So access to RAM is indiscriminately the same regardless of whether we're talking about the heap, the stack, or anything else.

Cache to the rescue

That being said, that's not the whole story. Modern architectures have caches. The discussion about caches is too broad to have here, but the gist is that caches are smaller, more expensive, but faster memories that "cache" data from RAM. So in real scenarios, data that was accessed before (temporal locality) or that is near previously accessed data (spatial locality) will most likely be fed faster to the CPU because it is available in cache.
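As a rough illustration of spatial locality (a sketch, not a rigorous benchmark; the buffer size and the 16-int stride are arbitrary choices of mine), traversing the same heap buffer sequentially versus one cache line at a time usually shows a measurable difference, even though the number of accesses is identical:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t n = 1u << 24;                  /* ~16M ints, larger than typical caches */
    int *v = malloc(n * sizeof *v);
    if (!v) return 1;
    for (size_t i = 0; i < n; ++i) v[i] = 1;

    long long sum = 0;

    clock_t t0 = clock();
    for (size_t i = 0; i < n; ++i) sum += v[i];                 /* sequential: cache friendly */
    clock_t t1 = clock();

    const size_t stride = 16;                                   /* 16 ints = one 64-byte cache line */
    for (size_t s = 0; s < stride; ++s)
        for (size_t i = s; i < n; i += stride) sum += v[i];     /* strided: poor spatial locality */
    clock_t t2 = clock();

    printf("sequential: %f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("strided:    %f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    printf("checksum:   %lld\n", sum);                          /* keeps the optimizer honest */
    free(v);
    return 0;
}
```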

Ok, that's nice, but what segment is faster?

As a rule of thumb, we generally say stack memory is faster than heap memory. That personally confused me at first when I was thinking only in terms of the first section, but once you take caches into account it makes sense: due to its usage pattern, the stack is almost always in cache.
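If you want to check this on your own machine, here is a rough sketch along those lines (assumptions on my part: C, the 100 000-int array from the question, and `clock()` for timing; the first touch of freshly allocated heap memory also pays for page faults, so results will vary with compiler flags, OS, and hardware):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 100000

static int data_segment[N];                 /* static storage (data/BSS segment) */

/* Fill then sum a buffer of N ints; the returned sum keeps the
   optimizer from throwing the loops away. */
static long long fill_and_sum(int *buf) {
    long long sum = 0;
    for (size_t i = 0; i < N; ++i) buf[i] = (int)i;
    for (size_t i = 0; i < N; ++i) sum += buf[i];
    return sum;
}

static double time_ms(int *buf, long long *out) {
    clock_t t0 = clock();
    *out = fill_and_sum(buf);
    clock_t t1 = clock();
    return 1000.0 * (double)(t1 - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    int stack_data[N];                      /* ~400 KB on the stack; may overflow small stacks */
    int *heap_data = malloc(N * sizeof *heap_data);
    if (!heap_data) return 1;

    long long s;
    double t;

    t = time_ms(stack_data, &s);
    printf("stack: %f ms (sum %lld)\n", t, s);

    t = time_ms(heap_data, &s);
    printf("heap:  %f ms (sum %lld)\n", t, s);

    t = time_ms(data_segment, &s);
    printf("data:  %f ms (sum %lld)\n", t, s);

    free(heap_data);
    return 0;
}
```

A single run like this mostly measures noise and page-fault cost; repeating each traversal many times and averaging gives more trustworthy numbers.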

So... use stack?

Unfortunately it isn't as simple as that. It never is, especially when you analyze low-level performance. The stack can't be very large. And sometimes, even if you could have your data on the stack, there are other reasons why it is preferable to put it on the heap. So I am sorry (not really) to tell you that the answer is never simple or black and white. All you can practically do is profile your application and see for yourself. That's relatively easy. Interpreting the results and knowing how to improve them is a whole other beast.

if, for instance, the traversal time on the heap is 2x for 100 000 elements and 1x for the stack, would this proportion be the same if we have a different size (10 000 000)?

Even for, let's say, the heap alone, performance isn't linear. Why? Caches again. While the data you access fits in cache, performance is nice; then you see a spike just as your data grows beyond the size of a cache level. On relatively older systems you had three clearly delimited regions corresponding to the three cache levels in a computer: you saw a spike as your data went from fitting in one level to only fitting in the next level up, and when it didn't fit in cache at all it went downhill. Modern processors have a "smart cache", which with some black magic makes it appear more as if you had one big cache instead of three levels.
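To see that non-linearity yourself, you can time the same traversal over heap buffers of growing size and look at the time per element (a sketch only; where the jumps appear depends on your cache hierarchy, and hardware prefetching may smooth them out for purely sequential access):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    for (size_t n = 1u << 10; n <= (1u << 26); n <<= 2) {       /* 1 K .. 64 M ints */
        int *v = malloc(n * sizeof *v);
        if (!v) break;
        for (size_t i = 0; i < n; ++i) v[i] = 1;

        long long sum = 0;
        clock_t t0 = clock();
        for (int pass = 0; pass < 8; ++pass)                    /* several passes to amortize noise */
            for (size_t i = 0; i < n; ++i) sum += v[i];
        clock_t t1 = clock();

        double ns_per_elem =
            1e9 * (double)(t1 - t0) / CLOCKS_PER_SEC / (8.0 * (double)n);
        printf("%zu ints: %.3f ns/element (checksum %lld)\n", n, ns_per_elem, sum);
        free(v);
    }
    return 0;
}
```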

bolov