> 1 byte for char, 4 bytes for int, 8 bytes for doubles, etc.

These are typical values, but they depend on the architecture (per this answer, there are even 9-bit-per-byte architectures still being sold today).
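If you want to see the actual sizes on your machine, a minimal sketch is the following; the exact numbers it prints depend on your compiler and architecture:

```c
#include <stdio.h>

int main(void) {
    /* These sizes are implementation-defined; char is always 1 byte,
       but a "byte" itself is CHAR_BIT bits (8 on most platforms). */
    printf("char:   %zu byte(s)\n", sizeof(char));
    printf("int:    %zu byte(s)\n", sizeof(int));
    printf("double: %zu byte(s)\n", sizeof(double));
    printf("char *: %zu byte(s)\n", sizeof(char *));
    return 0;
}
```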
> Can't the computer make use of the chars or integers currently taking space in the memory and refer to that specific slot when it wants to reuse it?
While this idea is certainly feasible in theory, in practice the overhead is way too big for simple data like characters: one character is usually a single byte.
If we were to set up a system in which each character value is allocated once and the string merely refers to it, every element of the string would have to record which character belongs there. In C this would be a pointer (you will encounter them at some point in your course), which is usually 4 or 8 bytes long (32 or 64 bits). Assuming 32-bit pointers, you would use 24 bytes of memory to store the string in this complex manner instead of 5 bytes with the simple method (and, to expand on this answer, you would need even more metadata to be able to properly modify the string during your program's execution).
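To make that overhead concrete, here is a rough sketch of what such a "pointer per character" scheme would look like. It is purely illustrative, not something you would do in real code, and the pointer sizes it prints depend on whether you compile for 32 or 64 bits:

```c
#include <stdio.h>

int main(void) {
    /* Simple storage: each character of the string is stored directly. */
    char simple[] = "hello";                  /* 5 characters + '\0' = 6 bytes */

    /* Hypothetical "refer to it" storage: one pointer per character,
       each pointing at a char that is stored once elsewhere. */
    static const char h = 'h', e = 'e', l = 'l', o = 'o';
    const char *indirect[] = { &h, &e, &l, &l, &o };  /* 'l' stored once, referenced twice */

    printf("simple:   %zu bytes\n", sizeof simple);
    printf("indirect: %zu bytes just for the pointers\n", sizeof indirect);
    return 0;
}
```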
Your idea of storing a chunk of data once and referring to it multiple times does, however, exist in several contexts:
- virtual memory (you will encounter this if you go towards OS development), where copy-on-write is used
- higher-level languages (like C++)
- filesystems which implement a copy-on-write feature, like BTRFS
- some backup systems (like borg or rsync) which deduplicate the files/chunks they store
- Facebook's zstandard compression algorithm, where a dictionary of small common chunks of data is used to improve compression ratio and speed
In such settings, where large amounts of data are stored, the extra bookkeeping needed to store the data once and refer to it many times is relatively small, and the savings in space and copy time are worth the added complexity.
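As a toy illustration of that trade-off, here is a sketch of chunk-level deduplication in the spirit of what such backup tools do; the fixed 4-byte chunk size and the linear lookup are simplifications of my own, not how borg or rsync actually work:

```c
#include <stdio.h>
#include <string.h>

#define CHUNK 4          /* unrealistically small chunk size, for illustration */
#define MAX_CHUNKS 64

/* Each distinct chunk is stored once; the "file" becomes a list of indices. */
static char store[MAX_CHUNKS][CHUNK];
static int  stored = 0;

static int intern(const char *chunk) {
    for (int i = 0; i < stored; i++)
        if (memcmp(store[i], chunk, CHUNK) == 0)
            return i;                     /* already stored: just refer to it */
    memcpy(store[stored], chunk, CHUNK);  /* new chunk: store it once */
    return stored++;
}

int main(void) {
    const char data[] = "ABCDABCDABCDEFGH";   /* 16 bytes with repetition */
    int refs[sizeof data / CHUNK];
    int nrefs = 0;

    for (size_t i = 0; i + CHUNK <= sizeof data - 1; i += CHUNK)
        refs[nrefs++] = intern(data + i);

    printf("input: %zu bytes, unique chunks: %d (%d bytes), references: %d\n",
           sizeof data - 1, stored, stored * CHUNK, nrefs);
    /* Here 16 bytes of input need only 2 unique chunks (8 bytes) plus 4 small references. */
    return 0;
}
```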