
I am quite new to embedded programming, so maybe this is an easy question for you.

I have seen linker script files / linker configuration files from different SDKs (e.g. IAR EWARM, Tasking, etc.) in which the sizes of the stack and heap are defined.

The size/range of RAM and flash for every microcontroller is also defined in the linker file. These are usually taken from the memory map in the user manual (the address ranges are provided in the user manual).

My question is: how are the sizes of the stack and heap calculated? Can I select any value for the stack/heap size, or are there criteria for that?

Raza
    Fun fact: Stack and Heap are implementation details, very common implementation details, but a computer need not use them. If the system running atop of the CPU makes no use of Stacks and Heaps, but uses other concepts for providing Automatic and Dynamic storage, why should the CPU documentation cover Stacks and Heaps? – user4581301 Jul 05 '21 at 02:12
  • For a bare-metal embedded system, one not running an operating system that will make these sorts of decisions for you, the choice is yours with the obvious caveat of don't specify so much of one that you're likely to run out of another type of memory. And don't forget the variables you may have with [static storage duration](https://en.cppreference.com/w/cpp/language/storage_duration#Storage_duration). – user4581301 Jul 05 '21 at 02:20
  • Most of the time I have next to no use for heap in an embedded system because I can't afford the [memory fragmentation](https://stackoverflow.com/questions/3770457). Almost everything is Stack or static. – user4581301 Jul 05 '21 at 02:22
  • Fragmentation is a minor issue. The heap sitting there in RAM taking up tons of space regardless of if/how much of it you use is a bigger issue. But the main issue is that the use of a heap is completely senseless in single core, single process MCU applications [see this](https://electronics.stackexchange.com/a/171581/6102). – Lundin Jul 05 '21 at 08:36

3 Answers


These are not defined in the microcontroller user manual because they are not hardware-defined constraints. Rather, they are application-defined: a software-dependent partitioning of memory, not a hardware-dependent one.

Local, non-static variables, function arguments and call return addresses are generally stored on the stack; so the required stack size depends on the call depth and the number and size of local-variables and parameters for each function in a call-tree. The stack usage is dynamic, but there will be some worst-case path where the combination of variables and call-depth causes a peak usage.
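As a rough illustration (the frame sizes below are made-up numbers; real figures come from your toolchain's stack-usage reports, e.g. GCC's `-fstack-usage` output), the worst-case requirement is just the sum of the frame sizes along the deepest call path:

```c
#include <stddef.h>

/* Hypothetical per-function frame sizes (bytes) along the deepest call
   path: locals + arguments + saved return address for each function.
   In practice these numbers come from static analysis or the
   toolchain's stack-usage reports. */
static const size_t frame_bytes[] = { 48, 32, 96, 24 }; /* main -> f -> g -> h */

/* Worst-case stack demand of a call path is the sum of its frames. */
size_t worst_case_stack(const size_t *frames, size_t depth)
{
    size_t total = 0;
    for (size_t i = 0; i < depth; ++i)
        total += frames[i];
    return total;
}
```

To that total you would then add the worst-case interrupt nesting usage described below, plus a safety margin.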

On top of that, on many architectures you also have to account for interrupt handler stack usage, which is generally less deterministic, but still has a "worst case" of interrupt nesting and call depth. For these reasons, ISRs should generally be short, deterministic, and use few variables.

Further, if you have a multi-threaded environment such as an RTOS scheduler, each thread will have a separate stack. Typically these thread stacks are statically allocated arrays or dynamically (heap) allocated, rather than defined by the linker script. The linker script normally defines only the system stack, for the main() thread and interrupt/exception handlers.
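For example, with a FreeRTOS-style API (the names and the commented-out call are illustrative, not taken from this question), per-thread stacks are often just static arrays, entirely separate from the linker-defined system stack:

```c
#include <stdint.h>

/* Sketch: each RTOS thread gets its own statically allocated stack
   array, sized per-thread from that thread's worst-case usage. */
#define STACK_WORDS 256                      /* 1 KiB each on a 32-bit MCU */
static uint32_t rx_task_stack[STACK_WORDS];  /* not part of the linker's stack */
static uint32_t tx_task_stack[STACK_WORDS];

/* A FreeRTOS-style creation call would then be handed the buffer, e.g.:
   xTaskCreateStatic(rx_task, "rx", STACK_WORDS, NULL,
                     prio, rx_task_stack, &rx_tcb);            */
```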

Estimating the required stack usage is not always easy, but methods for doing so exist, using either static or dynamic analysis; some of them are toolchain-specific.

Many default linker scripts automatically expand the heap to fill all remaining space available after static data and stack allocation. One notable exception is the Keil ARM-MDK toolchain, which requires you to explicitly set a heap size.
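As a sketch of what that looks like in a GNU ld script (symbol names like `_Min_Stack_Size` follow a common vendor convention but are not universal), the heap region is simply defined as everything between the end of static data and the stack reserved at the top of RAM:

```ld
_estack = ORIGIN(RAM) + LENGTH(RAM);    /* initial stack pointer, top of RAM */
_Min_Stack_Size = 0x400;                /* reserve 1 KiB for the stack */

._user_heap_stack (NOLOAD) :
{
  . = ALIGN(8);
  _heap_start = .;                      /* heap begins after .data/.bss */
  . = ORIGIN(RAM) + LENGTH(RAM) - _Min_Stack_Size;
  _heap_end = .;                        /* heap ends where the stack begins */
} > RAM
```

The C library's `sbrk`/heap implementation then consumes `_heap_start`/`_heap_end`, so the heap automatically grows or shrinks with the static data footprint.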

A linker script may reserve memory regions for other purposes, especially if the memory is not homogeneous. For example, on-chip MCU memory is typically faster to access than external RAM, and may itself be subdivided across different buses; a small segment on a separate bus can be useful for DMA, avoiding bus contention and yielding more deterministic execution.

The use of dynamic memory (heap) allocation in embedded systems needs to be carefully considered (or even banned as @Lundin would suggest, but not all embedded systems are subject to the same constraints). There are a number of issues to consider, including:

  • Memory constraints - many embedded systems have very small memories; you have to consider the response, safety, and functionality of the system in the event that an allocation request cannot be satisfied.
  • Memory leaks - your own code, your colleagues' code, and third-party code may not be of as high a quality as you would hope; you need to be certain that the entire code base is free of memory leaks (failing to deallocate/free memory appropriately).
  • Determinism - most heap allocators take a variable and non-deterministic length of time to allocate memory, and even freeing can be non-deterministic if it involves block consolidation.
  • Heap corruption - the owner of an allocated block can easily under/overrun an allocation and corrupt adjacent memory. Typically such memory contains the heap-management meta-data for the block or for other blocks, and the actual data of other allocations. Corrupting this data has non-deterministic effects on other code, most often unrelated to the code that caused the error, such that it is common for the failure to occur some time after, and in code unrelated to, the event that caused it. Such bugs are hard to spot and resolve. If the heap meta-data is corrupted, the error is often detected only when further heap operations (alloc/free) fail.
  • Efficiency - heap allocations made by malloc() et al. are normally 8-byte aligned and have a block of prepended meta-data. Some implementations may add a "buffer" region to help detect overruns (especially in debug builds). As such, making numerous allocations of very small blocks can be a remarkably inefficient use of a scarce resource.
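To see the efficiency point concretely, consider a simple overhead model (typical, but allocator-specific: the 8-byte header and 8-byte alignment here are assumptions): a hundred 4-byte allocations then cost 1600 bytes of heap for 400 bytes of payload.

```c
#include <stddef.h>

/* Illustrative malloc() overhead model: each block carries an 8-byte
   header, and the payload is rounded up to 8-byte alignment.
   Real allocators differ, but the shape of the cost is similar. */
size_t heap_footprint(size_t payload, size_t count)
{
    const size_t header = 8, align = 8;
    size_t rounded = (payload + align - 1) & ~(align - 1);
    return count * (header + rounded);
}
```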

Common strategies in embedded system to deal with these issues include:

  • Disallowing any dynamic memory allocations. This is common in safety critical and MISRA compliant applications for example.
  • Allowing dynamic memory allocation only during initialisation, and disallowing free(). This may seem counterintuitive, but can be useful where an application itself is "dynamic" and perhaps in some configurations not all tasks or device drivers etc. are started, where static allocation might lead to a great deal of unused/unusable memory.
  • Replacing the default heap with a deterministic memory allocation scheme such as a fixed-block allocator. Often these have a separate API rather than overriding malloc/free, so they are not strictly a replacement, just a different solution.
  • Disallowing dynamic memory allocation in hard-real-time critical code. This addresses only the determinism issue, but in systems with large memories, carefully designed code, and perhaps MMU protection of allocations, there may be mitigations for the other issues.
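A minimal sketch of such a fixed-block allocator (the block size and count here are arbitrary; real values come from your application's needs): allocation and freeing are O(1) pointer operations on a free list, so they are deterministic and cannot fragment.

```c
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE  32   /* must be >= sizeof(void *) */
#define BLOCK_COUNT 16

static uint8_t pool[BLOCK_COUNT][BLOCK_SIZE];
static void *free_list = NULL;
static int initialised = 0;

/* Thread every block onto the free list, reusing the block's own
   first bytes as the link pointer. */
static void pool_init(void)
{
    for (int i = 0; i < BLOCK_COUNT; ++i) {
        *(void **)pool[i] = free_list;
        free_list = pool[i];
    }
    initialised = 1;
}

/* Pop the head of the free list: O(1), returns NULL when exhausted. */
void *pool_alloc(void)
{
    if (!initialised) pool_init();
    void *blk = free_list;
    if (blk) free_list = *(void **)blk;
    return blk;
}

/* Push the block back on the free list: O(1), no consolidation needed. */
void pool_free(void *blk)
{
    *(void **)blk = free_list;
    free_list = blk;
}
```

Because every block is the same size, freeing a block always leaves a hole that a later allocation can use, which is exactly the property a general-purpose heap cannot guarantee.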
Clifford

Basically, the stack size is picked based on the expected program size. Larger and more complex programs will want more stack. It also depends on the architecture: 32-bitters will generally consume slightly more stack memory than 8- and 16-bitters. The exact value is picked based on experience, though once you know exactly how much RAM your program actually uses, you can increase the stack size to use most of the unused memory.

It's also customary to map the stack so that it grows into a harmless area upon overflow, such as non-mapped memory or flash - ideally so that you get a hardware exception, "software interrupt" or similar when stack overflow happens. You should never map it so that it grows into .data/.bss and overwrites other variables there.
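One way to arrange this in a GNU ld script (a sketch for a full-descending stack; the addresses and symbol names are illustrative) is to put the stack at the *bottom* of RAM, so an overflow runs off the start of RAM into unmapped space and faults instead of silently trashing .data/.bss above it:

```ld
MEMORY
{
  RAM (rwx) : ORIGIN = 0x20000000, LENGTH = 64K
}

_stack_size = 0x400;
/* Initial SP just above the reserved region; the stack grows DOWN
   toward ORIGIN(RAM), and past it lies unmapped memory -> bus fault. */
_estack = ORIGIN(RAM) + _stack_size;

/* .data/.bss are then placed starting at ORIGIN(RAM) + _stack_size,
   safely above the stack. */
```

On Cortex-M parts with an MPU, a small no-access guard region at the stack limit achieves the same effect even when the stack is elsewhere.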

As for the heap, the size is almost always set to 0 and the segment removed completely from the linker script. Heap allocation is banned in almost every microcontroller application.

Lundin
  • Thanks Lundin, I am not sure if I understand the last sentence "Heap allocation is banned in almost every microcontroller application." I can see the following definition of stack and heap in one of the linker script files: `#ifndef USTACK_TC0 #define USTACK_TC0 16k /* user stack size tc0 */ #endif #ifndef HEAP #define HEAP 16k /* heap size */ #endif` in which both the stack and heap are 16K. One question more: can I configure the start addresses of the stack and heap? Thanks – Raza Jul 05 '21 at 08:15
  • @MuhammadHussnainRaza Compilers need to implement a heap to be compliant, and to keep the quack segment of their user-base happy. Yes, you can configure these but how to do so is very MCU- and linker-specific. In case of ARM you can even drop a constant in flash which gets linked to address zero and that's your stack address. – Lundin Jul 05 '21 at 08:34
  • @Raza : Lundin takes a "just say no" approach to dynamic memory allocation in embedded systems. It is a legitimate stance, but I doubt the _"almost every microcontroller application"_ claim. Embedded applications and microcontrollers cover a broad spectrum, and depending on the platform and application the use of heap allocation ranges from _impractical_ through _dangerous_, _ill-advised_, _to be considered with extreme caution_ to _with care_. This last one "_with care_" applies to any system embedded or otherwise when using a non-garbage collected language such as C. – Clifford Jul 05 '21 at 11:06
  • Note that while I say I doubt the _"almost every microcontroller application"_ claim, that is not to say that I doubt that many of those systems are seriously flawed. Not everyone takes the zero tolerance approach that Lundin does, and many of those are also incompetent in the sense that they do not know there even _is_ an issue to be considered. – Clifford Jul 05 '21 at 11:09
  • @Clifford You shouldn't use something, whatever it might be, if you can't argue about why it makes sense. You shouldn't use heap allocation in MCUs for the same reason that you shouldn't demand that all your MCU chips should be painted bright pink just because you feel like it. There must be a real reason why - in the real world, it might not be technical rationale ("the customer required that all parts are to be painted pink"). But on SO where we discuss technology in general terms, it becomes hard to argue why a heap would ever make sense. – Lundin Jul 05 '21 at 11:23
  • @Lundin we have discussed this issue at length previously. In this instance, the OP seems genuinely interested in the arguments and justification but your answer and comments seem to be little more than "_just take my word for it_". It then becomes _received wisdom_ applied blindly without understanding why. I am simply advocating _informed decision_. Without that, one might choose to inappropriately accept an exception because, for example, they think it is only a matter of resource availability, and they have ample resources. – Clifford Jul 05 '21 at 11:43
  • I am certainly not advocating unconstrained use of dynamic memory allocation; it is always hard to justify; like global variables and goto (both of which I _do_ have zero tolerance of). Dynamically allocating to a global variable; now there's something to get animated about (or fire someone for). – Clifford Jul 05 '21 at 11:43
  • I am leaning towards @Lundin's approach, however it is my understanding that you can mitigate some of the issues with heap allocation, especially fragmentation, by utilizing a slightly larger than required memory pool / slab allocation. – Sorenp Jul 05 '21 at 12:58

Stack and heap are part of your program itself. Their sizes depend on how your program is structured and written, and on how much memory it takes up; the remaining free memory can serve as stack or heap depending on how you set it up.

In the linker script you can define these values.
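For instance, in a GNU-style linker script the sizes are typically just symbols that the startup code and heap implementation consume (the names vary by toolchain; these follow a common vendor convention):

```ld
_Min_Heap_Size  = 0x200;   /* 512 B reserved for the heap  */
_Min_Stack_Size = 0x400;   /* 1 KiB reserved for the stack */
```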

Dheeraj Kumar