154

Possible Duplicate:
Why use iterators instead of array indices?

I'm reviewing my knowledge of C++ and I've stumbled upon iterators. One thing I want to know is what makes them so special, and why this:

using namespace std;

vector<int> myIntVector;
vector<int>::iterator myIntVectorIterator;

// Add some elements to myIntVector
myIntVector.push_back(1);
myIntVector.push_back(4);
myIntVector.push_back(8);

for(myIntVectorIterator = myIntVector.begin(); 
        myIntVectorIterator != myIntVector.end();
        myIntVectorIterator++)
{
    cout<<*myIntVectorIterator<<" ";
    //Should output 1 4 8
}

is better than this:

using namespace std;

vector<int> myIntVector;
// Add some elements to myIntVector
myIntVector.push_back(1);
myIntVector.push_back(4);
myIntVector.push_back(8);

for(int y=0; y<myIntVector.size(); y++)
{
    cout<<myIntVector[y]<<" ";
    //Should output 1 4 8
}

And yes I know that I shouldn't be using the std namespace. I just took this example off of the cprogramming website. So can you please tell me why the latter is worse? What's the big difference?

CodingMadeEasy
  • 2
    Please read [contrast with indexing](http://en.wikipedia.org/wiki/Iterator#Contrasting_with_indexing) on Wikipedia. – Jesse Good Jan 17 '13 at 07:18

8 Answers

247

The special thing about iterators is that they provide the glue between algorithms and containers. For generic code, the recommendation would be to use a combination of STL algorithms (e.g. find, sort, remove, copy) that carries out the computation that you have in mind on your data structure (vector, list, map etc.), and to supply that algorithm with iterators into your container.
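
For instance, a minimal sketch of this glue (the values are just illustrative): the same std::find call works on a vector and, unchanged, on a list, because the algorithm only talks to iterators, never to the container itself.

#include <algorithm>
#include <iostream>
#include <list>
#include <vector>

int main()
{
    std::vector<int> v{1, 4, 8};
    std::list<int>   l{1, 4, 8};

    // std::find only needs a pair of iterators, so it is container-agnostic
    auto vit = std::find(v.begin(), v.end(), 4);  // works on a vector
    auto lit = std::find(l.begin(), l.end(), 4);  // and on a list
    if (vit != v.end() && lit != l.end())
        std::cout << "found 4 in both containers\n";
}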

Your particular example could be written as a combination of the for_each algorithm and the vector container (see option 3 below), but it's only one out of four distinct ways to iterate over a std::vector:

1) index-based iteration

for (std::size_t i = 0; i != v.size(); ++i) {
    // access element as v[i]

    // any code including continue, break, return
}

Advantages: familiar to anyone coming from C-style code, can loop using different strides (e.g. i += 2).

Disadvantages: only for sequential random access containers (vector, array, deque), doesn't work for list, forward_list or the associative containers. Also the loop control is a little verbose (init, check, increment). People need to be aware of the 0-based indexing in C++.

2) iterator-based iteration

for (auto it = v.begin(); it != v.end(); ++it) {
    // if the current index is needed:
    auto i = std::distance(v.begin(), it); 

    // access element as *it

    // any code including continue, break, return
}

Advantages: more generic, works for all containers (even the new unordered associative containers), can also use different strides (e.g. std::advance(it, 2)).

Disadvantages: need extra work to get the index of the current element (could be O(N) for list or forward_list). Again, the loop control is a little verbose (init, check, increment).

3) STL for_each algorithm + lambda

std::for_each(v.begin(), v.end(), [&v](int const& elem) {  // int is the element type of v here
    // if the current index is needed:
    auto i = &elem - &v[0];

    // cannot continue, break or return out of the loop
});

Advantages: same as 2), plus a small reduction in loop control (no check and increment), which can greatly reduce your bug rate (wrong init, check or increment, off-by-one errors).

Disadvantages: same as explicit iterator-loop plus restricted possibilities for flow control in the loop (cannot use continue, break or return) and no option for different strides (unless you use an iterator adapter that overloads operator++).

4) range-for loop

for (auto& elem : v) {
    // if the current index is needed:
    auto i = &elem - &v[0];

    // any code including continue, break, return
}

Advantages: very compact loop control, direct access to the current element.

Disadvantages: extra statement to get the index. Cannot use different strides.

What to use?

For your particular example of iterating over std::vector: if you really need the index (e.g. access the previous or next element, printing/logging the index inside the loop etc.) or you need a stride different from 1, then I would go for the explicitly indexed loop, otherwise I'd go for the range-for loop.
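
Applied to the vector from the question, a minimal sketch of that range-for recommendation would be:

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> myIntVector{1, 4, 8};

    for (int value : myIntVector)    // no index or iterator bookkeeping needed
        std::cout << value << " ";   // prints: 1 4 8
}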

For generic algorithms on generic containers I'd go for the explicit iterator loop unless the code contained no flow control inside the loop and needed stride 1, in which case I'd go for the STL for_each + a lambda.

TemplateRex
  • 1
    Well, if iteration is done over only one container, I guess using iterators with the `next`, `prev`, `advance` functions would do just fine even when previous/next elements and/or a different stride are needed, and possibly be even more readable. But using several iterators to iterate several containers simultaneously doesn't look very elegant and most likely indexes should be used in this case. – Predelnik May 16 '14 at 13:33
  • 3
    This is a very informative answer! Thank you for laying out the pros and cons of these four different approaches. One question: The index-based iteration uses `i != v.size()` for the test. Is there a reason to use `!=` instead of `<` here? My C instincts tell me to use `i < v.size()` instead. I would expect that either one should work the same, I'm just more used to seeing `<` in a numeric `for` loop. – Michael Geary Sep 21 '17 at 05:02
  • 1
    Using the range loop, wouldn't this require the container to have the elements in an array-like order? Would this still work to get the index with a container which does not store the items in sequential order? – Devolus Nov 14 '17 at 08:05
  • 1
    Not necessarily; not all range-iterable containers are array-like. For instance, you can iterate through all the values in a map, and a set (granted, it is kind of like an array). – Shipof123 Mar 14 '19 at 22:46
  • Is this code `auto i = &elem - &v[0];` safe for vector and list use? It looks like it just takes the difference between 2 addresses, and that may be a questionable correlation to the index offset. – user1502776 Apr 12 '19 at 21:53
  • 1
    the question was in the context of array indices, so contiguously laid out sequences such as `vector` and `array`. So no, it doesn't work for `list` or even `deque`. – TemplateRex Apr 13 '19 at 18:12
  • When you say that not having `break`, `continue`, or `return` is a disadvantage of `for_each`, it's actually one of its advantages. It documents better what the algorithm will be in that it will not include any of those operations, making understanding the code easier and less error prone. Similarly, the lack of different strides in the for each construct is advantageous for the same reason. – user904963 Dec 26 '21 at 04:01
  • 1
    @MichaelGeary The advantage of `<` is that it will work for different increments like `i += 3`. It's relatively standard, however, to use `!=` when the increment is one. – user904963 Dec 26 '21 at 04:05
  • @user1502776 It will not work unless the container is contiguous in memory like a `std::string`, `std::array`, or `std::vector`. If it did work with something like `std::list`, it would be a coincidence that cannot be relied on. – user904963 Dec 26 '21 at 04:07
  • "could be O(N) for list" - what about for a `std::map`, is retrieving the first and last pair with .begin() or .end() O(1)? I imagine `.begin()` must be O(1) on a balanced tree as it has a root node? – Dominic Feb 26 '22 at 22:51
12

With a vector, iterators do not offer any real advantage. The syntax is uglier, longer to type and harder to read.

Iterating over a vector using iterators is not faster and is not safer (actually, if the vector may be resized during the iteration, using iterators will put you in big trouble).
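
A minimal sketch of that resizing hazard (the values are made up; the point is that growth may reallocate the vector's storage):

#include <cstddef>
#include <vector>

void hazard()
{
    std::vector<int> v{1, 2, 3};

    // Iterator version: push_back may reallocate, invalidating 'it' and end().
    for (auto it = v.begin(); it != v.end(); ++it) {
        if (*it == 2)
            v.push_back(42);   // 'it' may now dangle; the following ++it and end() check are undefined behaviour
    }

    // Index version: v.size() and v[i] are re-evaluated each pass, so the loop
    // itself stays valid (it will simply also visit the newly appended element).
    for (std::size_t i = 0; i < v.size(); ++i) {
        if (v[i] == 2)
            v.push_back(42);
    }
}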

The idea of having a generic loop that keeps working when you later change the container type is also mostly nonsense in real cases. Unfortunately, the dark side of a strictly typed language without serious type inference (a bit better now with C++11, however) is that you need to say what the type of everything is at each step. If you change your mind later you will still need to go around and change everything. Moreover, different containers have very different trade-offs, and changing container type is not something that happens that often.

The only case in which iteration should, if possible, be kept generic is when writing template code, but that (I hope for you) is not the most frequent case.

The only problem present in your explicit index loop is that size returns an unsigned value (a design bug of C++), and comparison between signed and unsigned is dangerous and surprising, so it is better avoided. If you use a decent compiler with warnings enabled, there should be a diagnostic on that.

Note that the solution is not to use an unsigned value as the index, because arithmetic between unsigned values is also apparently illogical (it's modulo arithmetic, and x-1 may be bigger than x). Instead, you should cast the size to a signed integer before using it. It may make some sense to use unsigned sizes and indexes (paying a LOT of attention to every expression you write) only if you're working on a 16-bit C++ implementation (16 bits were the reason for having unsigned values in sizes).

As a typical mistake that an unsigned size may introduce, consider:

void drawPolyline(const std::vector<P2d>& points)
{
    // BUG: for an empty vector, points.size()-1 wraps around to a huge unsigned value
    for (int i=0; i<points.size()-1; i++)
        drawLine(points[i], points[i+1]);
}

Here the bug is present because if you pass an empty points vector the value points.size()-1 will be a huge positive number, making the loop run off the end into a segfault. A working solution could be

for (int i=1; i<points.size(); i++)
    drawLine(points[i - 1], points[i]);

but I personally prefer to always remove the unsigned-ness with int(v.size()).
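
The same function with the unsigned-ness removed up front would then read (a sketch reusing P2d and drawLine from the example above):

void drawPolyline(const std::vector<P2d>& points)
{
    // int(points.size()) - 1 is -1 for an empty vector, so the loop simply doesn't run
    for (int i = 0; i < int(points.size()) - 1; i++)
        drawLine(points[i], points[i + 1]);
}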

PS: If you really don't want to think through the implications yourself and simply want an expert to tell you, then consider that quite a few world-recognized C++ experts agree, and have expressed the opinion, that unsigned values are a bad idea except for bit manipulation.

Discovering the ugliness of using iterators in the case of iterating up to second-last is left as an exercise for the reader.
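
For the curious, one possible iterator formulation of that "up to the second-to-last" loop might look like this (a sketch, again reusing P2d and drawLine); judge the ugliness for yourself:

#include <iterator>

void drawPolyline(const std::vector<P2d>& points)
{
    if (points.empty())
        return;  // guard needed before forming prev(end())
    for (auto it = points.begin(); it != std::prev(points.end()); ++it)
        drawLine(*it, *std::next(it));
}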

6502
  • 3
    Would you elaborate why `size()` being unsigned is a design bug? I can't see a single reason how `for(int i = 0; ...)` could be preferable to `for(size_t i; ...)`. I've encountered problems with 32-bit indexing on 64-bit systems. – Angew is no longer proud of SO Jan 17 '13 at 07:41
  • 1
    -1: C++ has "serious" type inference. What do you mean? – Sebastian Mach Jan 17 '13 at 07:43
  • 9
    virtual -1: `ugly, longer to type, harder to read` -> a) this is POV, b) `for(auto x : container)`?? – Sebastian Mach Jan 17 '13 at 07:44
  • 1
    C++ has type inference: `auto`. – Yuushi Jan 17 '13 at 07:44
  • Why-1? I thought this was the best answer. – Caesar Jan 17 '13 at 07:47
  • @Caesar: There are multiple explanations for the downvote(s). – Sebastian Mach Jan 17 '13 at 07:48
  • @phresnel The for loop you're talking about is C++11; not everyone has that – Caesar Jan 17 '13 at 07:52
  • 1
    `16 bit was the reason for having unsigned values in sizes`: Never heard of that. Any citations about that? – Sebastian Mach Jan 17 '13 at 07:52
  • 1
    @Caesar: Yeah, that is the current standard. – Sebastian Mach Jan 17 '13 at 07:52
  • @phresnel Yub, but that doesn't mean all projects are in C++11. – Caesar Jan 17 '13 at 07:54
  • 1
    @Caesar OK then, without the range-based for: `for(auto it = v.begin(); it != v.end(); ++it)`. AFAIK, all at least remotely current compilers [support `auto`](http://wiki.apache.org/stdcxx/C%2B%2B0xCompilerSupport). – Angew is no longer proud of SO Jan 17 '13 at 07:55
  • 1
    @Caesar: And even in pure C++03, there's the nitpicky point that only `std::vector<int>::size_type` would be the "correct" index type for `std::vector<int>`, for which the only guarantee by the Standard is that it is unsigned. – Sebastian Mach Jan 17 '13 at 08:04
  • @Caesar: With your comment late-edit you invalidated my other comment :P – Sebastian Mach Jan 17 '13 at 08:06
  • @Angew: I edited that comment to a link to the explanation – 6502 Jan 17 '13 at 08:10
  • @Yuushi: C++11 has SOME type inference. If you change a data type you still need to go around and change for example every function that uses that type as return value... or you should always use typedefs everywhere. – 6502 Jan 17 '13 at 08:11
  • 1
    @6502: you can also declare your functions as e.g. `decltype(something) foobar()` or `auto frob(Foo a, Bar b) -> decltype(a+b)`. No typedefs harmed. – Sebastian Mach Jan 17 '13 at 08:15
  • @phresnel: using decltype with `->` is a solution only for trivial made-up cases. In general there is no relation between the type of parameters and the type of returned values... putting the type explicitly before the function name or right after it doesn't solve the problem. – 6502 Jan 17 '13 at 08:54
  • 1
    @6502: "In general there is no relation..." -> I disagree, I'd also disagree with "in general there is a relation...". Both of which are too general. In a ray tracer, you will often have many functions with a direct input-type->output-type mapping, in an inventory control, the opposite. It depends. However: The only cases where I see C++'s type inference and other facilities not working well in the case of type-fixing are when your function didn't return the right type on the first hand. E.g., `vector sql_query(string)` should have been `query_result sql_query(string)` from beginning – Sebastian Mach Jan 17 '13 at 09:26
  • 2
    @6502: Regarding size_t's unsignedness: No, it simply means I haven't heard of it yet. And google is relatively silent on the topic for different searches, pointing me (like you) to one of Alf's answers, which makes sense and sounds plausible, but isn't backed up by citations itself. I am not sure why "never heard of it" is the same as "I disagree" to you; that's a ton of speculation. And no, pure reasoning and deep C++ knowledge is not enough; the C++ standard does not contain such anecdote, neither does logic. – Sebastian Mach Jan 17 '13 at 09:35
  • 1
    And no need to be angry when someone points out (potential) flaws in your posts. I could have just downvoted you without explanation, but that would have been more impolite, imho. – Sebastian Mach Jan 17 '13 at 09:36
  • 3
    I mostly agree that unsigned types are unfortunate, but since they're baked into the standard libraries I also don't see good means of avoiding them. An "unsigned type whose value will never exceed `INT_MAX`" doesn't seem to me inherently any more reliable than what the other side proposes, "a signed type whose value will never be less than 0". If your container's size is larger than `INT_MAX` then obviously you can't convert it to `int` and the code fails. `long long` would be safer (especially as it's finally standard). I will never create a vector with 2^63 elements but I might with 2^31. – Steve Jessop Jan 17 '13 at 09:53
  • I realise I probably angered you with a misquotation sooner (the one containing "Bullshit???"). I don't remember what I was really referring to. Mea culpa, will remove the comment. – Sebastian Mach Jan 17 '13 at 10:31
  • 1
    @SteveJessop: the problem with unsigned types is that the "strange behavior" happens VERY close to normal usage scenarios (i.e. around zero). The overflow problem on 32 bits is present but only when you've containers with more than two billion elements (tiny elements, I may add) and that's not the "normal" case. IMO that was a design error even when size was 32767 (if that's not enough then soon 65535 won't be enough either... it was just one bit; much less than a single order of magnitude) but I agree this can be more debatable. – 6502 Jan 17 '13 at 11:33
  • @phresnel: I'm not angry... and I will not try to convince you of what I think is obvious (I passed the phase of "late night typing because someone is wrong in the internet" long ago). As a last note please consider that "unsigned" fault is mostly in the choice of the name. A name that better reflects real behavior (only thing that matters) is "modulo integer", "element of Z(2^n)" or "bitmask". I hope you agree that doesn't really make sense to say that the number of elements in a container is a bitmask. Unsigned ints are perfect for many uses... but **NOT** for sizes. – 6502 Jan 17 '13 at 11:40
  • 2
    @6502: To me this just means that one way of dealing with it (use an unsigned type and risk wraparound at 0) has a more obvious problem whereas the other (convert a size to `int`) has a more subtle problem. I actually prefer bugs that occur in common cases, to bugs that evade testing. The problem with converting a size to int isn't specifically that I think the number 2^31-1 "isn't enough". It's that if I'm writing some code that manipulates a vector then I want to accept all values of the type that the caller can create, I don't want to introduce additional confusing restrictions to my API. – Steve Jessop Jan 17 '13 at 12:13
  • @SteveJessop: indeed the bug is unfortunately in the choice that has been made when designing containers and now that is carved in stone and there is no clean escape possible. Should new interfaces fix the issue and use regular or possibly long integers for sizes, or should they be consistent with the bad choice made for the standard library? I like the former (and this SO question made me discover that also Meyers agrees on this), you like the latter. And phresnel apparently doesn't even understand what the discussion is about... – 6502 Jan 17 '13 at 13:33
  • @6502: what I do in practice is a bit ad hoc. If I am for some reason doing a simple iteration over a container using an index, then I'll use an unsigned type. If I design an API that's suppose to accept an index likewise (but I use iterators where possible anyway). For arithmetic other than just `++` I make a judgement call, and if there's subtraction involved then I'll usually make sure everything in sight is signed. Like you say, other people advocate using signed for everything. If a user unthinkingly creates an enormous container and uses their code on it tough luck, the program has UB. – Steve Jessop Jan 17 '13 at 14:05
  • And the design decision that I would most specifically criticise in C and C++ is that there are offsets between addresses that can be represented (because unsigned), but for which the difference between the addresses can't (because signed). Pointer subtraction is just broken. If it had never been allowed to have an object bigger than `PTRDIFF_MAX` or `SSIZE_MAX` then people could *completely safely* use signed types everywhere if that's their preference. I like giving people their preference :-) Using a really big signed type gets you there in practice if not in principle. – Steve Jessop Jan 17 '13 at 14:11
  • @phresnel: If you never heard of relation between unsigned size and 16 bit simply means you're too young, you don't read, or you just agree with who thinks that an unsigned is an integer that cannot be negative (this last one is nonsense, but IMO inexplicably popular among self-proclaimed C++ experts). Unsigned types for sizes in C++ are an historical wart and if you don't agree you should go explaining that to Bjarne Stroustrup. Pure reasoning (and knowing how C++ operators work) is however enough to understand that unsigned size is a bad idea. – 6502 Jan 17 '13 at 15:50
  • The whole point of iterators is that it documents that the loop will most likely iterate over every element in the container. You don't have to check much at all to confirm the loop isn't ending earlier than that. If you use `std::for_each`, you have to think even less since it basically guarantees every element will be looped over as well as guaranteeing there is no extra logic related to an early `break`, `continue`, or `return`. The for each construct has similar advantages although it won't work as well in template settings. – user904963 Dec 26 '21 at 04:14
  • @SteveJessop He has a valid point about the unsigned value of `vector::size`. He gives a concrete example where someone unfamiliar might run into problems with `for (int index = 0; index < v.size() - 1; ++index)` when iterating over an empty `vector`. It will, however, only rarely cause problems. It probably solves more problems than it creates for systems where `int` isn't 64-bit. – user904963 Dec 26 '21 at 04:22
  • @6502 returning a `long` wouldn't guarantee much since the max value of `long` is often the same as an `int`. – user904963 Dec 26 '21 at 04:26
  • @user904963: The problem of `size_t` being unsigned would have been solved by choosing the signed version with the same number of bits. The range would have been half but that is not a serious problem (if x is not enough now then 2x won't be enough for long anyway). It would be nice to have a non-negative type with proper semantic (e.g. for which the difference of two non-negatives gives a possibly negative result) but `unsigned` is NOT that and this bad choice has been since then a source of a huge number of bugs. – 6502 Dec 27 '21 at 16:39
9

Iterators make your code more generic.
Every standard library container provides an iterator, hence if you change your container class in the future, the loop won't be affected.
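
A minimal sketch of that point (print_all is a made-up helper): the same iterator-based loop compiles unchanged for a vector and a list, whereas an index-based loop would not.

#include <iostream>
#include <list>
#include <vector>

template <typename Container>
void print_all(const Container& c)
{
    for (auto it = c.begin(); it != c.end(); ++it)
        std::cout << *it << ' ';
    std::cout << '\n';
}

int main()
{
    std::vector<int> v{1, 4, 8};
    std::list<int>   l{1, 4, 8};
    print_all(v);   // 1 4 8
    print_all(l);   // 1 4 8 -- std::list has no operator[], so an index loop would not compile
}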

Alok Save
  • But don't all container classes have a size function? If I were to change the original container the latter should still be able to work because the size method doesn't change. – CodingMadeEasy Jan 17 '13 at 07:23
  • @CodingMadeEasy: in C++03 and earlier, `std::list` had an O(n) `size()` function (to ensure sections of the list - denoted by iterators - could be removed or inserted without needing an O(n) count of their size in order to update the overall container size: either way you win some / lose some). – Tony Delroy Jan 17 '13 at 07:28
  • 1
    @CodingMadeEasy: But builtin arrays don't have a size function. – Sebastian Mach Jan 17 '13 at 07:36
  • 4
    @CodingMadeEasy But not all containers offer random access. That is, `std::list` doesn't (and can't) have `operator[]` (at least not in any efficient way). – Angew is no longer proud of SO Jan 17 '13 at 07:42
  • @phresnel I wasn't aware that you could iterate through arrays. I thought they were only for container classes. – CodingMadeEasy Jan 17 '13 at 07:43
  • @phresnel good point about built-in arrays, but it is easy enough to write a template `size` function to get the size of one. – juanchopanza Jan 17 '13 at 07:48
  • @CodingMadeEasy For C++11, we finally have [`std::begin()`](http://en.cppreference.com/w/cpp/iterator/begin) and [`std::end()`](http://en.cppreference.com/w/cpp/iterator/end) for fixed-array work with iterators. Just fyi in case you didn't use them before. – WhozCraig Jan 17 '13 at 07:58
  • @juanchopanza: But a size function is only possible for statically sized arrays. With iterators you could also do `auto foobar = new float[10]; my_algorithm(foobar, foobar+10);` – Sebastian Mach Jan 17 '13 at 08:00
  • @phresnel That's right, there is no nice way to do it for dynamically allocated arrays (other than using a wrapper class that keeps track of the size). – juanchopanza Jan 17 '13 at 08:13
7

Iterators are the first choice over operator[]. C++11 provides the std::begin() and std::end() functions.

As your code uses just std::vector, I can't say there is much difference between the two versions; however, operator[] may not behave as you intend. For example, if you use a map, operator[] will insert an element if the key is not found.

Also, by using iterators your code becomes more portable between containers. You can switch from std::vector to std::list or another container freely without changing much if you use iterators; no such rule applies to operator[].
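
A minimal illustration of the std::map behaviour mentioned above (the key names are made up):

#include <iostream>
#include <map>
#include <string>

int main()
{
    std::map<std::string, int> ages;

    std::cout << ages["alice"] << '\n';  // operator[] inserts {"alice", 0} and returns 0
    std::cout << ages.size() << '\n';    // 1 -- a plain lookup silently grew the map

    if (ages.find("bob") == ages.end())  // find() looks up without inserting
        std::cout << "bob not present\n";
}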

Sebastian Mach
billz
  • Thank you for that. Once you mentioned std::map it made more sense to me. Since maps don't have to have a numerical key, if I were to change container classes I would have to modify the loop to accommodate the map container. With an iterator, no matter which container I change it to, it will be suitable for the loop. Thanks for the answer :) – CodingMadeEasy Jan 17 '13 at 07:46
4

It always depends on what you need.

You should use operator[] when you need direct access to elements in the vector (when you need to index a specific element in the vector). There is nothing wrong with using it over iterators. However, you must decide for yourself which (operator[] or iterators) best suits your needs.

Using iterators would enable you to switch to other container types without much change in your code. In other words, using iterators would make your code more generic, and does not depend on a particular type of container.
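
A tiny sketch of the "direct access" case (is_sorted_ascending is a made-up example): comparing each element with its predecessor is most naturally written with indices.

#include <cstddef>
#include <vector>

bool is_sorted_ascending(const std::vector<int>& v)
{
    for (std::size_t i = 1; i < v.size(); ++i)
        if (v[i] < v[i - 1])   // indexing makes "this element vs the previous one" direct
            return false;
    return true;
}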

Mark Garcia
  • So you're saying that I should use the [] operator instead of an iterator? – CodingMadeEasy Jan 17 '13 at 07:24
  • 1
    @CodingMadeEasy It always depends on what you want and what you need. – Mark Garcia Jan 17 '13 at 07:25
  • Yea that makes sense. I'll just keep working at it and just see which one is the most suitable for each situation – CodingMadeEasy Jan 17 '13 at 07:32
  • But `operator[]` is just as direct as iterators. Both just give references to elements. Did you mean `when you need to be able to manually index into a container`, e.g. `cont[x] < cont[x-1]`? – Sebastian Mach Jan 17 '13 at 07:41
  • @phresnel Yes. Point accepted. – Mark Garcia Jan 17 '13 at 07:42
  • Another advantage of iterators is that, even if you will never change out the container, you write the same pattern of code there and elsewhere even if you use different containers. Patterns reduce the amount of bugs. Additionally, iterators typically document immediately that the operation will run over every element in a container. `std::for_each` documents that better while removing the possibility for `break`, `continue`, or `return`. – user904963 Dec 26 '21 at 04:34
1

By writing your client code in terms of iterators you abstract away the container completely.

Consider this code:

#include <algorithm>
#include <functional>

class ExpressionParser // some generic arbitrary expression parser
{
public:
    template<typename It>
    void parse(It begin, const It end)
    {
        using namespace std;
        using namespace std::placeholders;
        for_each(begin, end,
            bind(&ExpressionParser::process_next, this, _1));
    }
    // process next char in a stream (defined elsewhere)
    void process_next(char c);
};

client code:

ExpressionParser p;

std::string expression("SUM(A) FOR A in [1, 2, 3, 4]");
p.parse(expression.begin(), expression.end());

std::istringstream file("expression.txt");
p.parse(std::istream_iterator<char>(file), std::istream_iterator<char>());

char expr[] = "[12a^2 + 13a - 5] with a=108";
p.parse(std::begin(expr), std::end(expr));  // note: std::end(expr) includes the trailing '\0'

Edit: Consider your original code example, implemented with `<algorithm>`:

using namespace std;

vector<int> myIntVector;
// Add some elements to myIntVector
myIntVector.push_back(1);
myIntVector.push_back(4);
myIntVector.push_back(8);

copy(myIntVector.begin(), myIntVector.end(), 
    std::ostream_iterator<int>(cout, " "));
utnapistim
  • Nice example, but the `istringstream` client call probably won't do what you want, because `operator>>(istream&, char&)` discards all whitespace (and although this can usually be turned off, my cursory glance at cplusplus.com suggests that it can't be turned off *in this case* because a special `sentry` object is created to leave it on... Ugh.) So e.g. if your `expr` was in the file `expression.txt`, the second call to `p.parse()` would (perhaps unavoidably) read `witha` from it as a single token. – j_random_hacker Apr 18 '16 at 16:02
0

The nice thing about iterators is that if you later want to switch your vector to another standard container, the for loop will still work.

Caesar
-1

It's a matter of speed. Using the iterator accesses the elements faster. A similar question was answered here:

What's faster, iterating an STL vector with vector::iterator or with at()?

Edit: speed of access varies with each CPU and compiler

Nicolas Brown
  • But in that post you just showed me it said that indexing is much faster :/ – CodingMadeEasy Jan 17 '13 at 07:21
  • My bad, I read the results from the benchmark underneath that one. I've read elsewhere that using the iterator is faster than indexing. I'm going to try it myself. – Nicolas Brown Jan 17 '13 at 07:23
  • Alright well thanks and let me know the results that you get – CodingMadeEasy Jan 17 '13 at 07:30
  • 3
    `at()` is different because it range checks and conditionally throws. There's no consistent performance benefit for iterators over indexing or vice versa - anything you measure will be a more-or-less random aspect of your compiler/optimiser, and not necessarily stable across builds, optimiser flags, target architectures etc. – Tony Delroy Jan 17 '13 at 07:31
  • 1
    I agree with @TonyD. In the link I posted, one person is saying indexing is faster while another is saying using the iterator is faster. I tried the code posted; the loop with the iterator took 40 seconds while the one using indexing only took 4. It's only a slight speed difference though – Nicolas Brown Jan 17 '13 at 07:41
  • Downvoted. It is not a matter of speed, but of modularity and code reuse. The speed depends on the particular implementation, and the guarantees on the iterators are not in terms of speed but code complexity (i.e. "linear access", "logarithmic access" etc). – utnapistim Jan 17 '13 at 12:22
  • figured that one out a bit too late... – Nicolas Brown Jan 17 '13 at 13:11