I'm really into understanding programming from the bottom up, so I've learned the internal construction of a tiny 64 KB computer, starting at the transistor level. I understand transistors and how to build multiplexers, decoders, an ALU, and so on.
I get that on the LC-3, which is the architecture I learned, an instruction like 0001 011 011 1 00001 means the leading 0001 gets decoded as an ADD, and so on. Yet I'm confused about how we can write assembly that leads to this. I understand that an assembler translates an instruction like ADD R3, R1, R2 into machine code, but what's really bugging me is how those ASCII characters get "interpreted" into the machine code.
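To make concrete what I do understand so far: the characters A, D, D are already binary in memory, one ASCII byte each, so an assembler is just another program reading those bytes. Here's a toy sketch in C of how I picture that part (my own illustration, not how any real assembler is written):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* The text "ADD" is already binary in memory: one ASCII byte
       per character. */
    const char *mnemonic = "ADD";
    for (int i = 0; mnemonic[i] != '\0'; i++)
        printf("'%c' = 0x%02X\n", mnemonic[i], (unsigned char)mnemonic[i]);

    /* An assembler is itself just a program that compares those
       bytes against a table of known mnemonics. */
    if (strcmp(mnemonic, "ADD") == 0)
        printf("opcode bits: 0001\n");
    return 0;
}
```

This prints 'A' = 0x41, 'D' = 0x44, 'D' = 0x44, which is the part I get. What I don't get is the step after that.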
I know at the electronic level how such an instruction is processed once it's in memory, like JMP changing the program counter, but at the rudimentary level, how do the assembly instructions turn into machine code/binary? I don't get how it goes from assembly to machine code.
I couldn't find much online, but a theory I came up with is that the typed keys just send electrical signals that are already binary. Still, I don't get how the computer turns this "ADD" into 0001, since it would need to recognize "ADD" in its entirety, not just the binary for each individual character. So what is the process of turning the assembly into binary that can then drive the logic gates, decoders, sign extension, etc.?
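Here's the kind of thing I imagine must be happening, written as a toy C sketch (my guess at the idea, not the real LC-3 assembler): compare the whole mnemonic against a table, then pack the register numbers into the right bit fields of a 16-bit word:

```c
#include <stdio.h>
#include <string.h>

/* Toy illustration: assemble the single line "ADD R3, R1, R2"
   into an LC-3 machine word. Register names are parsed by
   comparing ASCII bytes, nothing more. */
static unsigned reg_num(const char *tok) {
    return tok[1] - '0';   /* "R3" -> 3, using ASCII arithmetic */
}

int main(void) {
    const char *op = "ADD", *dr = "R3", *sr1 = "R1", *sr2 = "R2";
    unsigned short word = 0;

    if (strcmp(op, "ADD") == 0)
        word |= 0x1 << 12;          /* opcode 0001 in bits 15:12 */
    word |= reg_num(dr)  << 9;      /* DR  in bits 11:9 */
    word |= reg_num(sr1) << 6;      /* SR1 in bits 8:6 */
    word |= reg_num(sr2);           /* SR2 in bits 2:0; bit 5 = 0 means register mode */

    printf("0x%04X\n", word);       /* prints 0x1642 = 0001 011 001 000 010 */
    return 0;
}
```

Is it really just string comparison and bit shifting like this? And if so, what ran the very first assembler before assemblers existed?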
EDIT: For those asking which book I use, it's Introduction to Computing Systems: From Bits and Gates to C and Beyond, 2nd Edition, by Patt and Patel. It goes from building logic gates out of P/N transistors up to assembly and C. I couldn't recommend it more for anyone who wants an overview of the entire process.