15

Possible Duplicate:
To STL or !STL, that is the question

Are there cases where one should avoid using the C++ STL in their project?

Woops
  • Do you mean not use STL at all, i.e. "ban" STL, or just rewrite some of your own classes instead of using STL ones where there is already an STL class to do what you want? – CashCow Feb 03 '11 at 16:07

10 Answers

6

If you cannot use RTTI and/or exceptions, you may find that parts of the STL won't work. This is the case, e.g., for native Android apps. So if it doesn't give you what you need, that's a reason not to use it!
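For instance (a minimal sketch, not from the original answer): `vector::at()` reports errors by throwing `std::out_of_range`, so code written in this style no longer behaves as intended when the toolchain is built with `-fno-exceptions`, as some older Android NDK configurations were.

```cpp
// Hedged sketch: error reporting via exceptions, which breaks under -fno-exceptions.
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    try {
        std::cout << v.at(10) << '\n';   // out-of-range index: at() throws instead of returning
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
    return 0;
}
```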

Philipp
6

When you choose to use a framework like Qt, you might consider using lists, vectors, etc. from Qt rather than the STL. Not using the STL in this case saves you from having to convert from STL types to the Qt equivalents when you need to use them in your GUI.

This is debatable, and not everyone wants to use everything from Qt.

From http://doc.qt.nokia.com/latest/containers.html:

These container classes are designed to be lighter, safer, and easier to use than the STL containers. If you are unfamiliar with the STL, or prefer to do things the "Qt way", you can use these classes instead of the STL classes.
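As an illustration (a hedged sketch, not from the answer, assuming the Qt 4/5-era helpers QString::fromStdString, QString::toStdString, QVector::fromStdVector and QVector::toStdVector): every crossing between STL and Qt types copies the data, which is the conversion overhead the answer wants to avoid.

```cpp
// Hedged sketch assuming Qt 5: converting at the GUI boundary copies contents each way.
#include <QString>
#include <QVector>
#include <string>
#include <vector>

int main() {
    std::vector<int> values{1, 2, 3};
    std::string name = "example";

    // Crossing into Qt territory (e.g. to feed a widget): each call copies the data.
    QVector<int> qValues = QVector<int>::fromStdVector(values);
    QString qName = QString::fromStdString(name);

    // And back again when the rest of the code expects STL types.
    std::vector<int> back = qValues.toStdVector();
    std::string backName = qName.toStdString();

    return (back.size() == values.size() && backName == name) ? 0 : 1;
}
```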

Dirk
  • 3
    I can subscribe to the idea that STL containers aren't exactly safe and easy to use, but they are that exactly _because_ they are so lightweight (uncompromised performance was a major goal), so stating that Qt's containers are _"lighter"_ than the STL ones seems, um, _odd_ to me. (Disclaimer: I don't know Qt at all.) – sbi Feb 04 '11 at 17:06
4

If you care a lot about executable size, then you might want to avoid using STL in your program.

For example, uTorrent doesn't use STL and that is one reason why it's so small.

Since the STL relies heavily on templates (it is the Standard Template Library, after all), whenever you use templates the compiler has to generate extra code for every type you use with the STL.

This is compile time polymorphism and will increase your executable size the more you use it.

If you exclude STL from your project (and use templates sparingly or not at all), your code size will get smaller. Note that it won't necessarily be faster.
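A small illustration of the point above (my sketch, not darioo's code): each distinct instantiation of a class template gets its own compiled code, so using std::vector with three element types produces three sets of member functions in the binary, minus whatever the compiler and linker can fold together.

```cpp
// Hedged sketch: one set of generated member functions per element type actually used.
#include <string>
#include <vector>

int main() {
    std::vector<int> vi{1, 2, 3};
    std::vector<double> vd{1.5, 2.5};
    std::vector<std::string> vs{"a", "b"};

    vi.push_back(4);     // code generated for std::vector<int>::push_back
    vd.push_back(3.5);   // separate code for std::vector<double>::push_back
    vs.push_back("c");   // and again for std::vector<std::string>::push_back

    return static_cast<int>(vi.size() + vd.size() + vs.size()) == 10 ? 0 : 1;
}
```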

Also note that I'm not talking about a program's memory usage during execution, since that will depend on how many objects you're allocating during your app's lifetime.

I'm talking about the size of the executable binary itself.

If you want an example, note that a simple Hello World program, when compiled, might be bigger than a cleverly coded demo that packs an entire (runtime-generated) 3D engine into a very small executable.


Some info regarding uTorrent's size:

Official FAQ (from 2008); this question doesn't appear in the recent FAQ.

How was uTorrent programmed to be so efficient?

Second post regarding this.

Third post regarding this.

Note that, even though uTorrent is >300 KB and is compressed with UPX, it is still really small when you take into account what it's capable of doing.

darioo
  • Interesting, I didn't really expect this! Thanks for your answer – Woops Feb 03 '11 at 14:45
  • 9
    uTorrent is >300KB. I don't think using the standard STL containers would make much of a difference. Especially if you consider that writing your own containers will need space too. – Axel Gneiting Feb 03 '11 at 14:46
  • 1
    I/O streams tend to be big. Containers are rather small, as they allocate data on heap anyway. – Cat Plus Plus Feb 03 '11 at 14:48
  • 2
    You should probably explain *why* the STL can lead to code-size bloat - merely including it doesn't increase binary size, after all... – Eamon Nerbonne Feb 03 '11 at 14:49
  • 1
    @PiotrLegnica: I believe darioo is talking about executable code size rather than object size, as templates can lead to an explosion of instantiations and compilers often happily inline them, partially to increase execution speed (the usual size/speed tradeoff) as 1MB executables, for example, aren't a problem on today's desktops. – Fred Nurk Feb 03 '11 at 14:50
  • 1
    @PiotrLegnica: He's talking about executable size, not memory usage. Memory usage is a poor reason to avoid the STL anyhow; generally the need for memory is intrinsic to the problem and STL doesn't particularly bloat datastructures. In fact, due to the increased reliance on templates over e.g. pointers for genericity, it may indeed enable smaller datastructures. – Eamon Nerbonne Feb 03 '11 at 14:52
  • Though comparing hello world programs this way is [problematic](http://www2.research.att.com/~bs/bs_faq.html#Hello-world). – Fred Nurk Feb 03 '11 at 14:59
  • 1
    uTorrent executable is also compressed with [upx](http://upx.sourceforge.net/) – Nick Dandoulakis Feb 03 '11 at 14:59
  • Gah, I *was* thinking about code size, but my brain apparently isn't working today. I'll show myself out. – Cat Plus Plus Feb 03 '11 at 15:08
  • Also [SCARY](http://www2.research.att.com/~bs/SCARY.pdf) iterators (and other constructs) can reduce instantiations and thus code size, but they are often not as straight-forward to write. – Fred Nurk Feb 03 '11 at 15:10
  • "Since the STL relies heavily on templates (it is the Standard Template Library, after all), whenever you use templates the compiler has to generate extra code for every type you use with the STL." - this is inaccurate. C++ templates are only compiled for types you use. Just because you instantiate `std::vector` for one type does not mean that the compiler also has to create it for every other type. Templates, by design, reduce code bloat. – Zac Howland Feb 03 '11 at 16:02
  • Additionally, it's been a while since I used uTorrent, but I'm pretty sure the reason it doesn't use the standard library is because it is written to use MFC dynamically. – Zac Howland Feb 03 '11 at 16:19
  • 6
    -1: Completely disagree with these comments: http://stackoverflow.com/questions/367216/does-using-stl-increase-footprint-significantly The fact that it got so many up-votes is disturbing. – Martin York Feb 03 '11 at 18:08
  • @ZacHowland: You said "only compiled for types you use", but that's the same as what you quoted and called incorrect: "for every type you use". Templates reduce source code bloat, but that is not the same as executable code size. – Fred Nurk Feb 04 '11 at 01:09
  • @MartinYork: No answer on that question appears any more complete than this answer. – Fred Nurk Feb 04 '11 at 01:14
  • @Fred: You completely misunderstood what I said. Templates are only compiled for the types that are used in the compiled source. If you create a function `template <typename T> void f(T t)` and only ever call it like so: `f(1)`, the only function compiled into your executable is the integer version of the function. Sure, you can write the function definition like so: `void f(void* p)`, but then you are not really doing the same thing (and are circumventing the strongly-typed feature of C++). – Zac Howland Feb 04 '11 at 13:12
  • @ZacHowland: That is exactly what this answer and the bit you quoted in your comment also say. "the compiler has to generate extra code for every type you use", so it has to generate code for the versions of `f` you actually use, but not for versions you don't use. – Fred Nurk Feb 04 '11 at 13:30
  • @Fred: The "answer" states that it has to create EXTRA code for every type you use. It does not create any extra code; it only creates the code you need. There is nothing extra about it. – Zac Howland Feb 04 '11 at 13:42
  • @ZacHowland: So your comments are *entirely* about a terminological nitpick? Extra is, here, used in the sense of "additional", as in the compiler generates code for every type you use, and this code would not be generated (it is extra/additional) if you did not use those types. – Fred Nurk Feb 04 '11 at 13:46
  • @Fred Nurk: The linked question is not about answering this question; it is about debunking this answer as a complete fallacy. – Martin York Feb 04 '11 at 18:35
  • 1
    @MartinYork: I find that you don't understand any of this answer to be disturbing. – Fred Nurk Feb 04 '11 at 18:36
  • @Fred Nurk: I understand it. I just think it's bunkum. See my answer to the other question as to why. It's a common urban legend from the old days when C++ was first introduced into the world. On today's modern compilers these assertions don't hold. – Martin York Feb 04 '11 at 23:10
  • @MartinYork: Your other answer has less than half of the picture. Since you don't seem to be understanding what I'm saying, I'll let Stroustrup explain it: "we demonstrate notable speedups and *reduction in object code size* (real application runs 1.2x to 2.1x faster and STL code is *1x to 25x* smaller)" (emphasis mine) from the abstract of his (and others') OOPSLA 2009 paper. Do explicit problems in 2009 implementations still count as modern? This was even published after your other answer which you claim debunks these "urban legends". – Fred Nurk Feb 05 '11 at 06:01
  • @Fred Nurk: That paper just proves my point. Thanks I can use that as a reference next time. – Martin York Feb 05 '11 at 08:20
  • 2
    I see my answer has been getting a lot of flak lately. And I haven't seen arguments that are convincing enough about why. Just a lot of terminology nit picking and links to answers that (in my opinion) don't drastically differ from my own answer. So, if somebody is authoritative about this matter, I'd like to see an answer that quotes my answer and points out which parts are so blatantly wrong and what's really correct. – darioo Feb 05 '11 at 13:29
  • @MartinYork: How does that paper prove "It's a common urban legend from the old days"? Object code size which is *25 times larger*, to take the worst finding from it, is a significant problem. – Fred Nurk Feb 05 '11 at 18:56
4

Not really. There's no excuse to ban the use of an entire library, unless that library only serves one function, which is not the case with the standard library. The provided facilities should be evaluated on a per-function basis: for example, you may well argue that you need a container that serves a more specific purpose than vector, but that is no excuse to ban the use of deque, iostream, or for_each too.

More importantly, code generated via templates will not be more bloated than the equivalent code written by hand. You won't save code bloat by refusing to use std::vector and then writing your own equivalent vectors for float and double. Especially in 2011, the size of an executable is pretty meaningless compared to the size of other things like media in the vast, vast majority of situations.
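A rough sketch of that argument (hypothetical functions, not from the answer): whether you write sum_floats and sum_doubles by hand or let the compiler instantiate sum<T> twice, the binary ends up with two copies of essentially the same loop, so avoiding templates does not by itself shrink the executable.

```cpp
// Hedged sketch: hand-written duplication versus compiler-generated instantiations.
#include <vector>

// Duplicated by the programmer:
float sum_floats(const std::vector<float>& v) {
    float s = 0.0f;
    for (float x : v) s += x;
    return s;
}
double sum_doubles(const std::vector<double>& v) {
    double s = 0.0;
    for (double x : v) s += x;
    return s;
}

// Duplicated by the compiler: one instantiation per T actually used.
template <typename T>
T sum(const std::vector<T>& v) {
    T s = T(0);
    for (T x : v) s += x;
    return s;
}

int main() {
    std::vector<float> vf{1.0f, 2.0f};
    std::vector<double> vd{1.0, 2.0};
    return static_cast<int>(sum_floats(vf) + sum_doubles(vd) + sum(vf) + sum(vd));
}
```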

Puppy
3

I would say there may be occasions where you do not use a particular feature of the STL in your project because you can write a custom version that fits your needs better. STL collections are generic by nature.

You might want in your code:

  • Lock-free containers that are thread-safe (the STL ones are not).
  • A string class that is immutable by nature and shares the actual data "by reference" (with some mechanism).
  • An efficient string-building class that is not ostringstream (not part of the STL anyway, but you may mean the whole standard library).
  • Algorithms that use map and reduce (not to be confused with std::map; map-and-reduce is a way to iterate over a collection using multiple threads or processes, possibly even distributed across different machines) – a rough threaded sketch follows this list.
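For the last bullet, a rough threaded sketch (my illustration, not a real map/reduce framework): each chunk of the container is mapped and partially reduced on its own thread via std::async, and the partial results are then combined.

```cpp
// Hedged sketch: chunked map-and-reduce over a std::vector using std::async.
#include <cstddef>
#include <future>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> data(1000);
    std::iota(data.begin(), data.end(), 1);          // 1, 2, ..., 1000

    const std::size_t chunks = 4;
    const std::size_t step = data.size() / chunks;
    std::vector<std::future<long long>> partials;

    for (std::size_t i = 0; i < chunks; ++i) {
        auto first = data.begin() + static_cast<long>(i * step);
        auto last  = (i + 1 == chunks) ? data.end()
                                       : first + static_cast<long>(step);
        partials.push_back(std::async(std::launch::async, [first, last] {
            long long sum = 0;                       // "map" (identity here) plus a local "reduce"
            for (auto it = first; it != last; ++it) sum += *it;
            return sum;
        }));
    }

    long long total = 0;                             // final reduce over the partial sums
    for (auto& f : partials) total += f.get();
    return total == 500500 ? 0 : 1;                  // 1 + 2 + ... + 1000
}
```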

Hey, look, so much of Boost was written because what the standard library provided at the time did not really address the needs of the programmer, so Boost provided alternatives.

I am not sure if this is what you meant, or if you specifically meant that the STL should be "banned" at all times (e.g. device-driver programming, where templates are considered bloated even though that is not always the case).

CashCow
3

If you are working to particular standards that forbid it.

For example, the MISRA C/C++ guidelines are aimed at automotive and embedded systems and forbid using dynamic memory allocation, so you might choose to avoid STL containers altogether.

Note: The MISRA guideline is just an example of a standard that might influence your choice to use STL. That particular guideline doesn't rule out using all of the STL. But (I believe) it rules out using STL containers as they rely on runtime allocation of memory.
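Related to the allocator discussion in the comments below, here is a hedged sketch of a hypothetical StaticPoolAllocator that hands out elements from a fixed, statically reserved array instead of the free store. It is a toy (it never reuses memory, is not thread-safe, and is certainly not claimed to satisfy MISRA); it only illustrates that where a container gets its memory is a replaceable policy.

```cpp
// Hedged sketch: a toy C++11 allocator drawing from a fixed static arena.
#include <cstddef>
#include <new>
#include <vector>

template <typename T>
struct StaticPoolAllocator {
    using value_type = T;

    static T pool[4096];        // the fixed arena (one per element type)
    static std::size_t used;    // bump index into it

    StaticPoolAllocator() = default;
    template <typename U>
    StaticPoolAllocator(const StaticPoolAllocator<U>&) {}

    T* allocate(std::size_t n) {
        if (used + n > 4096) throw std::bad_alloc();  // arena exhausted
        T* p = pool + used;
        used += n;
        return p;
    }
    void deallocate(T*, std::size_t) {}               // toy: memory is never handed back
};

template <typename T> T StaticPoolAllocator<T>::pool[4096];
template <typename T> std::size_t StaticPoolAllocator<T>::used = 0;

template <typename T, typename U>
bool operator==(const StaticPoolAllocator<T>&, const StaticPoolAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const StaticPoolAllocator<T>&, const StaticPoolAllocator<U>&) { return false; }

int main() {
    std::vector<int, StaticPoolAllocator<int>> v;
    v.reserve(100);             // a single, predictable allocation up front
    for (int i = 0; i < 100; ++i) v.push_back(i);
    return v.size() == 100 ? 0 : 1;
}
```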

GrahamS
  • 3
    The stdlib containers don't require using dynamic memory allocation. You can supply your own allocator that does whatever you wish. That said, the std::allocator has problems for some domains which are not easily rectified, which is most clearly shown in the EASTL paper. – Fred Nurk Feb 04 '11 at 14:13
  • @Fred Nurk: As I understand it, the `std` containers *always* use dynamic memory allocation. Yes, you can supply your own `std::Allocator` implementation to control how that allocation occurs (e.g. providing one that allocates from a memory pool). However you still face the potential issue of running out of available memory, which is why MISRA forbids dynamic memory allocation altogether. – GrahamS Feb 04 '11 at 15:31
  • 1
    @GrahamS: The stdlib containers don't require using dynamic memory allocation. They use their allocator to allocate anything. – Fred Nurk Feb 04 '11 at 15:46
  • @Fred Nurk: yes... so they *dynamically* request *memory* from an *allocator*... – GrahamS Feb 04 '11 at 15:52
  • 1
    @GrahamS: "Dynamic memory allocation" has a very specific meaning in C++ and is not the same as doing something "dynamically". Consider a loop which reads user input and *dynamically* performs actions based on it. Or consider "vector<int> v(42);": the allocation performed by vector's ctor isn't "dynamic" in the sense that I know exactly what it will allocate, and from where. – Fred Nurk Feb 04 '11 at 15:56
  • @Fred Nurk: I understand your point, but the STL containers use the `Allocator` implementation to *request memory at runtime* (which meets the Wikipedia definition on "Dynamic Memory Allocation"). The basis of MISRA's objection is that requesting memory at runtime is a common cause of failure on low-memory embedded systems. Even if you implement an `Allocator` that serves up chunks of memory from a statically allocated pool, it still might run out of memory at runtime, so it still doesn't meet the spirit of the MISRA guideline. – GrahamS Feb 04 '11 at 16:18
  • @GrahamS: "which meets the Wikipedia definition on 'Dynamic Memory Allocation'" ... ... *sigh* I don't think you understand my point at all. – Fred Nurk Feb 04 '11 at 16:38
  • @Fred Nurk: and I don't think you understand mine. You seem to be implying that as long as we don't touch the free-store/heap then it isn't *really* dynamic memory allocation so it is fine. That isn't the case. The MISRA objection is to the runtime allocation of memory because of the issues it creates (alloc failures, fragmentation, non-linear alloc time etc). You can write your own `Allocator`, `new` and `malloc` that serve memory from a statically allocated pool, but you still have to service **runtime** memory allocation requests, so you still have the same issues. – GrahamS Feb 04 '11 at 16:56
  • @Graham: "The STL" is a __pure, abstract concept__ for abstracting containers into sequences and letting algorithms operate on those. This concept certainly does __not commit to a specific memory allocation__ strategy! You can use all of its algorithms on __C arrays__ as well as on other sequences. (Think of input iterators.) Also, I have repeatedly pointed out, here on SO, an implementation of a vector-like container that keeps its data on the stack. (See http://stackoverflow.com/questions/3563591/3564923#3564923, for example.) So you're wrong. __The STL is not about dynamic memory.__ – sbi Feb 04 '11 at 17:01
  • @sbi: Yep, I completely agree that plenty of the STL is still perfectly usable without violating that MISRA guideline. However the STL containers do require an `Allocator` that provides allocation of memory at runtime - so they are not usable under that guideline. I'll edit my answer to point out this doesn't rule out all of the STL. – GrahamS Feb 04 '11 at 17:10
  • @GrahamS: The MISRA objection is to memory allocation behavior that isn't known in advance. This is what leads to allocation failure, fragmentation, etc. Given "vector<int> v(42);", I know exactly what is allocated by vector, from where, and when – even though that allocation "happens" at runtime. Does this require care in how you use the STL? Yes, but the entire point of MISRA (and other guidelines for embedded systems) is being careful in how *everything* is done in order to maintain reliability and robustness. – Fred Nurk Feb 04 '11 at 17:11
  • @Fred: Okay so what happens if I call `v.push_back(10)` say 10,000 times on your `vector` there? Or are you saying it is okay to use `vector` as long as it is a constant length (in which case just use a `boost::array` to enforce that). – GrahamS Feb 04 '11 at 17:21
  • @GrahamS: On an embedded system, you'd better know exactly what your stdlib implementation does in areas where the standard gives leeway; this includes vector::push_back, but is definitely not limited to the STL parts of the stdlib. Or you could use vector::reserve which has much less leeway. – Fred Nurk Feb 04 '11 at 17:24
  • @Graham: What happens if you do this __depends on the allocator__. `std::allocator` will happily hand you out the memory as long as `new` succeeds. Other allocators could do other things. (And, as I said before, you do not need to use `std::vector` at all, but that's orthogonal to which allocator you use.) – sbi Feb 04 '11 at 17:37
  • @sbi: I don't think it really matters how the allocator operates, it still has to operate at runtime, so it still has the same issues when compared to static compile-time allocation. – GrahamS Feb 04 '11 at 18:22
  • 2
    @GrahamS: You're focusing too much on the words "runtime" and "memory"; push_back, resize, insert, and reserve don't involve the allocator when the new size is less than the current capacity. Consider local variables in functions: function calls "happen at runtime" and require memory, yet this isn't dynamic memory allocation. – Fred Nurk Feb 04 '11 at 18:26
  • @Fred Nurk: You're focussing on a very specific C++ definition of *"dynamic memory allocation"* which you've yet to reveal to me, but I suspect is something like *"using new/malloc to allocate memory from the free-store/heap at runtime"*. The important distinction for MISRA is between statically and dynamically/runtime allocated memory. Stack memory is a completely different thing, as decent tools can determine max stack usage via code analysis (provided recursion is avoided) so there is no issue with running out of memory. Nor do stacks suffer from fragmentation or non-linear time allocation. – GrahamS Feb 04 '11 at 22:38
  • @Fred Nurk: regarding your point about reserving a large enough capacity so that subsequent calls to `push_back`, `resize`, `insert`, `reserve` don't need the allocator - yep that definitely *helps* and is what we sometimes do when we cannot statically allocate, as it at least allows the problem to be more contained, but it is still a violation of the guideline and would have to be approved and documented. Generally though a `boost::array` is a safer, MISRA-compliant choice as it is statically allocated and does not grow. – GrahamS Feb 04 '11 at 23:09
  • @GrahamS: So write an allocator for containers which only uses space on the stack. Just as stack variables can only use a predefined maximum of space, make your allocator fail if its limit is exceeded. Can we agree that if you do this, the container doesn't dynamically allocate memory? – Fred Nurk Feb 04 '11 at 23:11
  • As far as the C++ definition of dynamic memory allocation, I wasn't trying to hide anything ("which you've yet to reveal to me"), but I didn't think I needed to repeat the standard. It's in C++03 §3.7.3 "Dynamic storage duration". – Fred Nurk Feb 04 '11 at 23:12
  • @Fred: well I'd agree that if I did that then I'd have a container that still dynamically allocates memory at runtime, but now does so on the stack. :) Fair enough? But that's not a terribly practical solution, as presumably it only allows for containers that have the lifetime of the method/function they were created in, which is a pretty small subset of normal usage - plus tweaking the stack at runtime is highly error prone, especially cross-platform, and means that automatic analysis to determine max stack size won't work. – GrahamS Feb 04 '11 at 23:28
  • @Fred: I had checked the standard already, but found no definition of "dynamic memory allocation" with a "very specific meaning in C++". In fact that phrase only appears once in the entire doc (§17.4.3.4). The section you cite is about the differences between the automatic, static and dynamic storage durations. If you base your objections on that then consider that the memory reserved by containers via `Allocator` lasts longer than the call to `allocate` so it is not automatic duration and is `deallocated` before the program ends so it is not static duration. That leaves..? – GrahamS Feb 05 '11 at 00:07
  • 1
    @Graham: If you have an allocator that allocates on the stack, you have exactly the same runtime "problems" locale variables have to face. Sure, it's still happening at runtime, but - so what? And if you need to return such a thing - what's the difference to C, which doesn't allow that kind of thing in the first place? See, that argument, too, is moot. – sbi Feb 05 '11 at 01:56
  • @sbi: yep I agree, a stack-based allocator would avoid the fragmentation and non-linear alloc concerns. But doesn't it just introduce new issues? e.g. the max stack size can no longer be calculated at compile time; you still have to deal with `bad_alloc` exceptions; what happens when a vector using space in the middle of the stack needs to reallocate? (Note these issues are caused because it allocates at runtime, instead of statically). Also I don't have the MISRA guidelines to hand here, but they are generally very strict so I doubt they'd approve of direct manipulation of the stack pointer. – GrahamS Feb 05 '11 at 13:55
2

It can increase executable size. If you're running on an embedded platform, you may wish to exclude the STL.

tenpn
1

When you use something like the Qt library, which implements equivalent functionality, you may not need the STL. It may also depend on other needs, like performance.

thorsten müller
0

The only reason is if you are working on embedded systems with low memory, or if your project coding guidelines explicitly forbid STL.

I can't see any other reasonable reason to roll your own incompatible, bug-ridden implementation of some of the STL's features.

Marko
  • 1
    Embedded systems with low memory can still use parts of the STL (such as std::copy and std::fill). If you don't have enough memory to use the STL, you probably don't have enough room to code in a high level language on an embedded system. – Thomas Matthews Feb 03 '11 at 18:13
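To illustrate the comment above (my sketch, not Thomas Matthews's code): the algorithm half of the STL works on plain C arrays with no dynamic allocation at all, so avoiding the containers does not have to mean avoiding <algorithm>.

```cpp
// Hedged sketch: standard algorithms over plain C arrays, with no heap use at all.
#include <algorithm>

int main() {
    int source[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int dest[8];

    std::fill(dest, dest + 8, 0);             // raw pointers are valid iterators
    std::copy(source, source + 8, dest);

    return std::count(dest, dest + 8, 4) == 1 ? 0 : 1;
}
```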
0

TR18015 deals with some of the limitations of the STL. It looks at them from a different angle (what compilers could do better), but it is still an interesting, if in-depth, read.

I'd be careful in general with microprocessors and small embedded systems. First, compiler optimizations are not up to what you know from desktops, and you run into hardware limits much sooner.

Having said that, it depends a lot on the libraries you use. I/O streams are notoriously slow (and require a careful implementation to not be), whereas std::vector is merely a thin wrapper.

peterchen