I am adding (yet) another sub-module to my fairly large C project, which runs on the little processor I have. My new module may be small, but it needs to build up quite a lot of intermediate data every cycle in order to compute its results. I created a data structure to hold all of this intermediate data, and here is what I'm wondering about:
- I could simply create a global instance of this data structure in my module's source file, available at all times from everywhere in the module. But that means the intermediate data permanently occupies RAM, even while other modules are running. That's what I originally had, and it can't be optimal in any way.
- I could create a local instance of this data structure at the beginning of my update cycle and pass a pointer to it to every function. But that's horrible from a coding point of view (long function prototypes), and it means that every time I call another function, I push another copy of that pointer onto the stack (right?).
- I could also create a global variable (scoped to my module) that is simply a pointer to the intermediate data; the structure itself would still be created locally at the start of the update cycle, with the pointer set to it. That means only one pointer-sized variable (a couple of bytes on a small MCU) is permanently there, even while other modules are running, and the data is reachable from everywhere in the module without long prototypes or extra stack traffic. (A rough sketch of all three options follows below.)
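To make the three options concrete, here is a minimal sketch. Every name in it (`intermediate_t`, the `my_module_update_*` functions, the helpers) is made up purely for illustration; only the allocation strategy is the point.

```c
/* Sketch of the three allocation strategies (all names are made up). */

typedef struct {
    int  filtered_input;   /* placeholder fields standing in for the */
    long accumulator;      /* real per-cycle intermediate results    */
} intermediate_t;

/* Option 1: file-scope instance -- RAM is occupied permanently. */
static intermediate_t g_scratch;

static void helper_option1(void)
{
    g_scratch.accumulator++;
}

void my_module_update_option1(void)
{
    helper_option1();
}

/* Option 2: local instance, pointer passed to every helper. */
static void helper_option2(intermediate_t *s)
{
    s->accumulator++;
}

void my_module_update_option2(void)
{
    intermediate_t scratch = {0};   /* exists only during this call */
    helper_option2(&scratch);
    /* ... every other helper also takes the pointer ... */
}

/* Option 3: file-scope pointer, the struct itself is still local. */
static intermediate_t *sp_scratch;  /* only this pointer stays resident */

static void helper_option3(void)
{
    sp_scratch->accumulator++;
}

void my_module_update_option3(void)
{
    intermediate_t scratch = {0};
    sp_scratch = &scratch;          /* valid only while this call runs */
    helper_option3();
    /* ... */
    sp_scratch = 0;                 /* avoid leaving a dangling pointer */
}
```

In options 2 and 3 the structure itself only lives on the stack while the update call is running.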
Some info about my system: modules run one after the other; re-entrance only happens to fetch results, never to perform further calculations. The intermediate data is really only needed during each module's update call, and those calls always come in the same order.
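Simplified, the call pattern I'm describing looks something like this (module and function names are placeholders):

```c
/* Placeholder declarations for the other modules. */
void module_a_update(void);
void module_b_update(void);
void my_module_update(void);
int  my_module_get_result(void);
void display_show(int value);

/* Hypothetical scheduler loop: module updates always run one after the
   other, in the same order; afterwards other code only reads results. */
void scheduler_run(void)
{
    for (;;)
    {
        module_a_update();
        module_b_update();
        my_module_update();                    /* intermediate data is only needed in here */
        display_show(my_module_get_result());  /* "re-entrance": results are read, nothing recomputed */
    }
}
```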
Which solution is best, and from which point of view? Where does the compiler come into play? Is there some general memory mechanism I'm not aware of here?
Thanks in advance for the clarifications!