I was reading this amazing article: https://www.technovelty.org/linux/plt-and-got-the-key-to-code-sharing-and-dynamic-libraries.html about dynamic and static linking.
After finishing it, two questions remain unanswered, or at least not clear enough for me to understand.
1)
This is not fine for a shared library (.so). The whole point of a shared library is that applications pick-and-choose random permutations of libraries to achieve what they want. If your shared library is built to only work when loaded at one particular address everything may be fine — until another library comes along that was built also using that address. The problem is actually somewhat tractable — you can just enumerate every single shared library on the system and assign them all unique address ranges, ensuring that whatever combinations of library are loaded they never overlap. This is essentially what prelinking does (although that is a hint, rather than a fixed, required address base). Apart from being a maintenance nightmare, with 32-bit systems you rapidly start to run out of address-space if you try to give every possible library a unique location. Thus when you examine a shared library, they do not specify a particular base address to be loaded at
Then how does dynamic linking solve this issue? On the one hand the writer says all libraries can't use the same address, and on the other hand he says giving every library its own unique address will exhaust the address space. I see a contradiction here. (Note: I know what a virtual address is.)
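To make my confusion concrete, I ran a small experiment (Python, assuming Linux with libc and ASLR enabled; `printf` is just an arbitrary symbol to probe). It prints the address at which libc's `printf` ends up in two separate processes. My current understanding is that the loader may pick a different base each time precisely because the library code is position-independent:

```python
import subprocess
import sys

# Child snippet: load libc and print the runtime address of printf.
child = r"""
import ctypes, ctypes.util
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")
print(ctypes.cast(libc.printf, ctypes.c_void_p).value)
"""

# Run it twice: each process maps the very same .so file, but the
# loader is free to place it at a different base address per process
# (with ASLR the two addresses usually differ).
addrs = [int(subprocess.check_output([sys.executable, "-c", child]))
         for _ in range(2)]
print(addrs)
```

So the same library file clearly gets mapped wherever there is room, which is what I thought "no fixed base address" meant, and that is exactly why the quoted paragraph confuses me.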
2)
This handles data, but what about function calls? The indirection used here is called a procedure linkage table or PLT. Code does not call an external function directly, but only via a PLT stub. Let's examine this:
I didn't get it: why is the handling of data different from the handling of functions? What's the problem with storing function addresses inside the GOT, the same way we do with ordinary variables?
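To check whether I at least understand the lazy-binding idea the article hints at, I wrote this toy Python simulation (my own sketch, not how the real loader works): each "GOT" slot for a function starts out pointing at a resolver stub, and the first call through the slot patches in the real address, so later calls skip the lookup entirely:

```python
# Toy model of lazy binding: a function's "GOT" slot initially points
# at a stub that resolves the real target on first use and patches
# the slot, so every later call goes straight to the target.
resolutions = []                 # record how often the "dynamic linker" ran

def real_strlen(s):              # stand-in for a function in a shared lib
    return len(s)

LIBRARY = {"strlen": real_strlen}

def make_stub(name):
    def stub(*args):
        resolutions.append(name)         # the expensive lookup, done once
        GOT[name] = LIBRARY[name]        # patch the GOT slot in place
        return GOT[name](*args)          # then jump to the real function
    return stub

GOT = {"strlen": make_stub("strlen")}

def call_via_plt(name, *args):
    # A PLT stub always jumps through the GOT slot, whatever it holds.
    return GOT[name](*args)

print(call_via_plt("strlen", "hello"))   # first call: resolve, then run -> 5
print(call_via_plt("strlen", "hi"))      # second call: direct -> 2
print(len(resolutions))                  # the resolver ran only once -> 1
```

If this picture is right, then my question is really: data references don't get this stub treatment and must be resolved up front, so why are the two cases handled differently?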