I just looked at the doc, and what it says is that if a procedure is encountered more than once in a single stack sample, it is only counted once.
(I assume here the term "procedure" should really be "line of code in a procedure".)
What that means is: if there is recursion going on, you don't want the time cost of a function to be artificially inflated by the depth of the recursion.
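The counting rule can be sketched as follows. This is a minimal illustration, not the profiler's actual code; the function name `inclusive_counts` and the example stacks are made up for the demonstration.

```python
from collections import Counter

def inclusive_counts(samples):
    """For each procedure, count the number of stack samples in which it
    appears at least once. A procedure appearing several times in one
    sample (recursion) is still counted only once for that sample."""
    counts = Counter()
    for stack in samples:
        for fn in set(stack):  # set() collapses recursive repeats
            counts[fn] += 1
    return counts

# Hypothetical samples: 'fib' recurses, so it appears many times per stack.
samples = [
    ["main", "fib", "fib", "fib"],
    ["main", "fib", "fib"],
    ["main", "print_result"],
]
counts = inclusive_counts(samples)
# 'fib' is on the stack in 2 of the 3 samples, even though it appears
# 5 times in total, so its inclusive count is 2, not 5.
```

Without the `set()`, `fib` would be charged 5 out of 3 samples, which is exactly the amplification-by-recursion-depth that the rule is meant to prevent.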
Also, I would point out some other things (explained in more detail here):
The first column is "% time" not counting children, i.e. self percent. That's a useless statistic, because almost every line is a procedure call (so nearly all of its time is spent in children), and you can tell by looking at the line whether it is a procedure call.
The second column is "cumulative time" including children. That's fine, but it should be a percent, so that you don't have to divide by total time to see what fraction it represents.
The reason that number matters is it represents what the line is responsible for - the fraction of total time that could be saved if that line were not there.
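In sampling terms, that inclusive percent is just the fraction of samples in which the line (or procedure) appears anywhere on the stack. A minimal sketch, with a made-up `inclusive_percent` helper and invented sample stacks:

```python
def inclusive_percent(samples, fn):
    """Percent of samples in which fn appears anywhere on the stack.
    That fraction is the share of total time the line is responsible
    for - roughly what could be saved if the line were not there."""
    hits = sum(1 for stack in samples if fn in stack)
    return 100.0 * hits / len(samples)

# Hypothetical stack samples from a profiler run.
samples = [
    ["main", "work", "read_file"],
    ["main", "work", "read_file"],
    ["main", "work", "parse"],
    ["main", "report"],
]
# 'work' is on the stack in 3 of 4 samples, so it is responsible for
# about 75% of total time; removing it could save about that much.
```

No division by total time is needed: the percent falls straight out of the sample counts.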
The third column is "self time" which, as I explained, is a useless statistic.
The fact that it's extremely small or zero reflects its uselessness.
Since it is included in the cumulative time, if it were not extremely small, the cumulative time would show it too, so it tells you nothing that the cumulative time doesn't.
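To make the self-versus-cumulative distinction concrete: in sampling terms, self time is the fraction of samples where the procedure is at the *top* of the stack. A minimal sketch, with an invented `self_percent` helper and made-up sample stacks:

```python
def self_percent(samples, fn):
    """Percent of samples where fn is on top of the stack: 'self time'.
    For a line that is a call into something else, fn is almost never
    on top, so this number is near zero - and since it is a subset of
    the inclusive (cumulative) number, it adds no new information."""
    tops = sum(1 for stack in samples if stack and stack[-1] == fn)
    return 100.0 * tops / len(samples)

# Hypothetical samples: 'work' mostly delegates to 'helper'.
samples = [
    ["main", "work", "helper"],
    ["main", "work", "helper"],
    ["main", "work"],
]
# 'work' is on the stack in every sample (inclusive 100%), but on top
# in only one of three, so its self time is small and uninformative.
```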
Also, as the author points out, samples are suspended during I/O,
so if it is doing some I/O that you didn't want or ask for, deep in some library,
and if that is making the program take 100 times as long as it otherwise would,
the profiler (and you) will be totally unaware of it.
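The same blindness is easy to demonstrate with CPU time versus wall-clock time. This sketch is not the profiler in question; it uses `time.sleep` as a stand-in for unwanted blocking I/O, which a CPU-time clock (like a profiler that suspends sampling during I/O) simply never sees:

```python
import time

def compute():
    # Pure CPU work: visible to both the CPU clock and the wall clock.
    return sum(i * i for i in range(200_000))

def io_like():
    # Stand-in for blocking I/O deep in some library: contributes to
    # wall-clock time but not to CPU time, just as I/O is invisible
    # to a profiler whose samples are suspended during I/O.
    time.sleep(0.2)

cpu0, wall0 = time.process_time(), time.perf_counter()
compute()
io_like()
cpu_elapsed = time.process_time() - cpu0
wall_elapsed = time.perf_counter() - wall0
# wall_elapsed includes the 0.2 s of "I/O"; cpu_elapsed barely moves.
# A CPU-time-only view would report the program as fast even if the
# I/O made it take many times longer than it otherwise would.
```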