It depends on the encoding. In UTF-8 an ASCII character is 1 byte; in UTF-16 it is 2 bytes. (UTF-8 is actually variable-width: characters outside ASCII take 2 to 4 bytes, and UTF-16 characters outside the Basic Multilingual Plane take 4.) It doesn't matter whether the byte holds 00000001 or 10000000: a full byte is reserved for the character when it is initialized, and if the char changes, that storage is updated with the new value.
A string's size in bytes equals the number of characters between the quotes (for ASCII text in UTF-8; double it for UTF-16).
Example: 11111111 is a byte with every bit set.
UTF-8 char T = 01010100 (1 byte)
UTF-16 char T = 01010100 00000000 (2 bytes, little-endian byte order)
UTF-8 string "coding" = 01100011 01101111 01100100 01101001 01101110 01100111 (6 bytes)
UTF-16 string "coding" = 01100011 00000000 01101111 00000000 01100100 00000000 01101001 00000000 01101110 00000000 01100111 00000000 (12 bytes)
UTF-8 \n = 00001010 (1 byte; in source you type 2 characters, '\' 01011100 and 'n' 01101110, but the compiler stores them as a single newline character)
UTF-16 \n = 00001010 00000000 (2 bytes)
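If you want to check these counts without working out the bits by hand, most languages have an encoding API. Here is a minimal sketch in Python (any language with a string-encode function behaves the same way); the euro sign is included to show UTF-8's variable width:

# Count the bytes each encoding actually uses. "utf-16-le" is the
# little-endian variant with no byte-order mark, matching the examples above.
for text in ("T", "coding", "\n", "€"):
    utf8_len = len(text.encode("utf-8"))
    utf16_len = len(text.encode("utf-16-le"))
    print(f"{text!r}: UTF-8 = {utf8_len} byte(s), UTF-16 = {utf16_len} byte(s)")

# Prints:
# 'T': UTF-8 = 1 byte(s), UTF-16 = 2 byte(s)
# 'coding': UTF-8 = 6 byte(s), UTF-16 = 12 byte(s)
# '\n': UTF-8 = 1 byte(s), UTF-16 = 2 byte(s)
# '€': UTF-8 = 3 byte(s), UTF-16 = 2 byte(s)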
Note: every space and every character you type takes up a byte or two in the source file, but storage is so plentiful that unless you are writing for a computer or game console from the early 90s with 4 MB of memory or less, you shouldn't worry about bytes when it comes to strings or chars.
What actually hurts is heavy computation with floats, decimals, or doubles, and calling a math random function inside a loop or an update method; the cost there is CPU time every frame rather than memory. That work is better run once at startup, or on a fixed-time update and averaged over the time span, as in the sketch below.
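To make that concrete, here is a minimal Python sketch (expensive_noise is a hypothetical stand-in for whatever heavy float math you have): instead of redoing the work every frame, cache the result and refresh it on a fixed interval.

import math
import random

# Hypothetical stand-in for a heavy float/double computation.
def expensive_noise() -> float:
    return sum(math.sin(random.random() * i) for i in range(10_000))

class FixedIntervalUpdater:
    """Refresh the expensive value on a fixed frame interval, not every frame."""

    def __init__(self, interval_frames: int = 60):
        self.interval = interval_frames
        self.frame = 0
        self.cached = expensive_noise()  # heavy work done once, up front

    def update(self) -> float:
        # Called once per frame; only every `interval` frames pays the cost.
        self.frame += 1
        if self.frame % self.interval == 0:
            self.cached = expensive_noise()
        return self.cached

The same idea covers the averaging point: accumulate cheap per-frame samples and do the heavy math once per interval instead of on every update.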