I've been studying C, and the volatile keyword came up in a discussion of compiler optimizations. The idea, as I understand it, is that volatile protects against certain optimizations so the data is always read fresh from memory instead of from some cached copy. I don't fully understand what the C compiler optimizations are actually doing, or how they relate to the various interconnected topics involved. All I've gathered is that with volatile the data lives in memory, where a debugger can view it, rather than being cached somewhere. My mental model of the memory hierarchy is: cache as SRAM sitting inside the CPU, main memory as RAM outside the CPU, and a hard drive further out still and slow to access. I couldn't find much on the compiler-optimization side when I searched, and it left me with a few inter-related questions on this topic that no one seems to cover all at once, but which would be helpful:
1.) Why isn't cached data available to a debugger? Shouldn't it be able to look through the L1-L4 caches for the stored variable?
I found a similar discussion, Why is volatile needed in C?, but I don't understand some of the concepts there in full detail. 2.) Why are variables cached in SRAM not accessible?
3.) Are variables in the cache inaccessible because of pipelining, so that debuggers refuse to show them due to a potential mismatch with what the program actually sees?
4.) The quote below says this would be disastrous with threads, but don't threads share memory? And why isn't that shared memory accessible when it sits in cache, as opposed to RAM or a hard drive? (See the sketch after this list.) "For example, if you declare a primitive variable as volatile, the compiler is not permitted to cache it in a register -- a common optimization that would be disastrous if that variable were shared among multiple threads."
5.) Deleted
6.) Deleted
7.) Are optimization levels standard across all compilers (the -O1, -O2, -O3, -O4 flags)?
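To make question 4 concrete, here is the kind of situation I think the quoted sentence is describing. This is only a sketch of my current understanding (the names and the one-second sleep are made up by me, built with something like `gcc -O2 -pthread`): my impression is that, with optimizations on, the compiler may read `done` into a register once and never re-read it from memory inside the loop, so the waiting thread might never notice the other thread's write. I also gather a plain shared variable like this is technically a data race, so I'm only using it to illustrate what "cache it in a register" means, not claiming volatile is the correct fix for threading.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Plain shared flag: no volatile, no atomics. */
static int done = 0;

static void *worker(void *arg)
{
    (void)arg;
    sleep(1);   /* stand-in for some real work */
    done = 1;   /* tell the main thread we're finished */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);

    /* As I understand it, an optimizing compiler may load 'done'
       once before the loop and keep it in a register, since nothing
       inside the loop appears to change it, so this can spin forever. */
    while (!done) {
        /* busy-wait */
    }

    puts("saw done == 1");
    pthread_join(t, NULL);
    return 0;
}
```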
I've tried compiling code and inspecting the variables, but I could not access them in the debugger until I used volatile. I expected to be able to view them anyway; adding volatile worked, but my real gap is understanding how all of the topics above fit together in one explanation.
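For reference, this is roughly the kind of experiment I ran (the variable name and values are placeholders, not my exact code). Compiled with something like `gcc -O2 -g`, the debugger reported the plain variable as optimized out at my breakpoint, but after adding `volatile` I could watch it change:

```c
#include <stdio.h>

int main(void)
{
    /* Without 'volatile', the optimizer can keep 'counter' in a
       register or fold the whole loop into a constant, and the
       debugger may then report it as "<optimized out>".
       With 'volatile', every read and write of 'counter' has to go
       through memory, so a breakpoint in the loop can inspect it. */
    volatile int counter = 0;   /* try removing 'volatile' and rebuilding */

    for (int i = 0; i < 10; i++) {
        counter += i;           /* set a breakpoint here and inspect 'counter' */
    }

    printf("counter = %d\n", counter);
    return 0;
}
```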