I am taking a Computer Architecture course at university, and I was assigned to write a tool that takes a floating point number as input, stores it in memory (I assume), and prints out the hexadecimal form of the number's binary representation in the IEEE 754 standard.
Now, I am confident about the algorithm for converting any decimal floating point number to its binary IEEE 754 form on paper, yet I struggle to come up with a solution in assembly (inputs can be numbers such as -157.4, 0.5, -0.6, etc.).
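(To show where I stand: on paper, for 0.5 the sign bit is 0, and 0.5 = 1.0 x 2^-1, so the biased exponent is -1 + 127 = 126 = 01111110 and the mantissa bits are all zero, which packs to 0 01111110 00000000000000000000000 = 0x3F000000. That part I can do.)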
My guess is that I need to extract the sign, the exponent, and the mantissa from the input using ASCII codes and string manipulation: store either 0 or 1 for the sign, convert everything before the '.' to binary, and shift the bits right or left until only a single 1 remains before the binary point, storing the number of shifts (that count plus 127 would be the biased exponent, right?). Then I somehow have to deal with the remaining part of the input (after the '.'). Should I repeatedly multiply it by two, like on paper, or is there a better method for this sort of problem? Lastly, the program should convert each group of 4 bits to a hex digit, but I am not sure how.
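To make sure I actually understand the steps before attempting them in assembly, I sketched the whole pipeline in C (this is my own rough draft, all names are mine; it truncates instead of rounding, so the last mantissa bit can be off for inputs like -0.6):

```c
#include <stdio.h>
#include <stdint.h>

/* Sketch only: it truncates instead of rounding and ignores
 * overflow/subnormals. The point is the algorithm, not polish. */
static uint32_t decimal_to_ieee754(const char *s)
{
    uint32_t sign = 0;
    if (*s == '-') { sign = 1; s++; }

    /* integer part -> high half of a 32.32 fixed-point value */
    uint64_t fixed = 0;
    while (*s >= '0' && *s <= '9')
        fixed = fixed * 10 + (uint64_t)(*s++ - '0');
    fixed <<= 32;

    /* fractional part: the multiply-by-two trick from paper, 32 times;
       whenever the scaled fraction num/den reaches >= 1, the next bit is 1 */
    if (*s == '.') {
        uint64_t num = 0, den = 1;
        s++;
        while (*s >= '0' && *s <= '9') {
            num = num * 10 + (uint64_t)(*s++ - '0');
            den *= 10;
        }
        for (int i = 31; i >= 0; i--) {
            num *= 2;
            if (num >= den) {
                fixed |= (uint64_t)1 << i;
                num -= den;
            }
        }
    }

    if (fixed == 0)
        return sign << 31;                  /* +0.0 or -0.0 */

    /* normalize: locate the leading 1; bit 32 has weight 2^0, so its
       distance from bit 32 is exactly the unbiased exponent */
    int msb = 63;
    while (!((fixed >> msb) & 1)) msb--;
    int exponent = msb - 32;

    /* mantissa = the 23 bits right after the leading 1 (truncated) */
    uint32_t mantissa = (msb >= 23)
        ? (uint32_t)((fixed >> (msb - 23)) & 0x7FFFFF)
        : (uint32_t)((fixed << (23 - msb)) & 0x7FFFFF);

    return (sign << 31) | ((uint32_t)(exponent + 127) << 23) | mantissa;
}

int main(void)
{
    static const char hexdigit[] = "0123456789ABCDEF";
    const char *tests[] = { "-157.4", "0.5", "-0.6" };
    for (int i = 0; i < 3; i++) {
        uint32_t bits = decimal_to_ieee754(tests[i]);
        printf("%s -> ", tests[i]);
        /* hex output by hand: 4 bits at a time, table lookup per nibble */
        for (int shift = 28; shift >= 0; shift -= 4)
            putchar(hexdigit[(bits >> shift) & 0xF]);
        putchar('\n');
    }
    return 0;
}
```

If this C version captures the right idea, my question is really about how to translate these loops, especially the multiply-by-two step and the nibble-to-hex lookup, into 8086 instructions.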
I don't want copy-paste solutions; I am trying to learn assembly and understand the inner workings, not just finish the assignment. If anyone has dealt with this kind of problem, where should I start? What should I study? I have nearly three weeks for the task.
(Last bit: both emu8086 and NASM should be able to assemble the program.)
Thank you!