
Both cppreference.com¹ and cplusplus.com² claim that erase on a set or map container, when it takes an iterator as its argument, is "amortized constant".

This gives few hints about its worst-case complexity. Can I hope it will never be worse than logarithmic, at least with reasonable implementations?


¹http://en.cppreference.com/w/cpp/container/map/erase#Complexity

²http://www.cplusplus.com/reference/map/map/erase/#complexity

  • Duplicate of http://stackoverflow.com/questions/200384/constant-amortized-time – Sam Varshavchik Dec 31 '15 at 02:29
  • 1
  • @SamVarshavchik, no, it's not a dupe. I understand what amortized time is. I'm asking about worst-case time, which is different. –  Dec 31 '15 at 02:31
  • No, amortized constant time means amortized constant time, and you have no guarantees about the worst case complexity of a single, individual operation. Those two web sites are quoting directly from the C++ standard, which does not give any other guarantees. – Sam Varshavchik Dec 31 '15 at 02:32
  • @SamVarshavchik, I have no guarantees unless the Standard or the implementation makes them, and that's what I'm asking about. –  Dec 31 '15 at 02:33
  • E.g. if the map is implemented as a red-black tree, then one erasure may cause a lot of rebalancing. But when you erase all elements, the average number of operations per element is bounded by a constant. – Kerrek SB Dec 31 '15 at 02:34
  • As I said, the C++ standard only guarantees amortized constant time. See 23.2.4 of ISO/IEC 14882-2011. That's it. Whether an implementation can give a better guarantee is going to be up to the implementation. – Sam Varshavchik Dec 31 '15 at 02:40
  • @SamVarshavchik Well, thank you for that. I still hope that any realistic implementation doesn't make this worse than logarithmic *worst case*… –  Dec 31 '15 at 02:43
  • @gaazkam I'm tending to dupe hammer this question, unless you're putting some essential arguments into your question why it's not a duplicate of http://stackoverflow.com/questions/200384/constant-amortized-time. – πάντα ῥεῖ Dec 31 '15 at 02:43
  • @SamVarshavchik: There is a two-argument version of std::map::erase which erases a range of elements. Its complexity is guaranteed to be `O(log S + N)` where S is the size of the container and N is the number of elements to be erased. You could find the successor iterator in O(log S) and use the two-argument erase member function with N=1, resulting in O(log S) time. It would be extraordinary if that did not just apply to the one-argument version. – rici Dec 31 '15 at 02:44
  • @πάνταῥεῖ: That question is about interpreting the meaning of amortized constant. This question is asking whether a particular interface has a clear worst-case guarantee. I don't see how that can possibly be a dupe. – rici Dec 31 '15 at 02:46
  • I would respond to that by saying that this gives an implementation the option of implementing a more optimized algorithm for the single-argument erase() than for the two-argument version. After all, amortized constant time complexity does, over time, win over logarithmic complexity. In light of that, I'd say that if you want a floor on your worst-case performance, always use the two-argument version of erase(), giving up any optimizations that the one-argument version can bring to the table. – Sam Varshavchik Dec 31 '15 at 02:48
  • @SamVarshavchik: I'd say that it is much more likely that the two-argument version is optimized. To do a range delete, you need to identify the common ancestor of the range, which involves a O(log S) tree walk. After that you can do the entire delete with two cuts, a join, and a rebalance. The red-black rebalance is O(log S) and amortized O(1), but since there has already been an O(log S) term, the amortized guarantee doesn't help here. The O(N) part is the cost of `delete`ing the N node objects. You couldn't achieve this optimization if the single-argument erase were called in a loop. – rici Dec 31 '15 at 03:00

1 Answer


Yes, you can hope that the worst-case complexity of std::map::erase(map::iterator) is logarithmic in the size of the container.

There is a two-argument version of std::map::erase which erases a range of elements, given iterators defining the start and limit of the range. Its complexity is guaranteed (in Table 101, referenced in §23.2.4 [associative.reqmts]) to be O(log S + N), where S is the size of the container and N is the number of elements to be erased. You could find the successor iterator in O(log S) and use the two-argument erase member function with N=1, resulting in O(log S) time. It would be extraordinary if that did not apply to the one-argument version.

That's not an ironclad guarantee, but it is grounds for a reasonable hope.

rici