On Intel machines, integer division requires a sequence of two instructions:
- After you store the dividend in the a register (e.g., %eax), you sign-extend it into the corresponding d register (%edx), e.g., with cdq.
- The idiv or div instruction itself then performs the division, placing the quotient in the a register and the remainder in the d register. (For the unsigned div, you zero the d register instead of sign-extending.)

But why does the operation require the sign extension? What does the algorithm actually do with that sign-extended value? How does the remainder wind up in that same space?
I am not asking how to do integer division on Intel or what happens if you do not follow the rules. I want to know why these are the rules: how the algorithm works such that it must use the sign-extended register space.
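For concreteness, here is the 32-bit sequence I mean, written as a C analogue; the comments map each step onto the registers, and the variable names are mine, purely for illustration:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int32_t dividend = -65;     /* loaded into %eax                 */
        int32_t divisor  = 3;       /* in some other register, say %ecx */

        /* cdq: sign-extend %eax into %edx, forming the pair %edx:%eax */
        int64_t widened = dividend;

        /* idiv %ecx: quotient goes to %eax, remainder to %edx */
        int32_t quotient  = (int32_t)(widened / divisor);   /* -21 */
        int32_t remainder = (int32_t)(widened % divisor);   /*  -2 */

        printf("q = %d, r = %d\n", quotient, remainder);
        return 0;
    }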
Let's take a manageable example with 8-bit operands—decimal -65 / 3.
               -----------------
    00000011 ) 11111111 10111111
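(A quick C check, just to confirm that bit pattern: sign-extending the 8-bit value really does fill the upper byte with ones.)

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int8_t  dividend = -65;
        int16_t widened  = dividend;    /* sign extension */
        printf("low  byte: 0x%02X\n", (uint8_t)dividend);                 /* 0xBF = 10111111 */
        printf("high byte: 0x%02X\n", (uint8_t)((uint16_t)widened >> 8)); /* 0xFF = 11111111 */
        return 0;
    }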
If it were 65/3 (with a positive dividend), I could see how the left padding would provide room for the subtraction—
                        00010101
               -----------------
    00000011 ) 00000000 01000001
                   0000 0011
                   ---------
                   0000 000100
                     00 000011
                     ---------
                     00 00000101
                        00000011
                        --------
                        00000010 (R)
(I abbreviated in the above by not showing instances of subtracting 0. Also, I showed the subtractions as such rather than as additions of two's-complement negations of the divisor, but the basic point would remain the same.) Here, the 0 padding makes room to subtract each bit. However, I do not see how this would work when the dividend is a two's-complement negative integer. And, even in this positive case, it's not obvious why the remainder should wind up in the same space that had held the padding, except for the mere convenience of its already being available.
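To make that positive case concrete, here is my attempt at simulating the textbook unsigned restoring (shift-and-subtract) scheme in C; I am assuming, perhaps wrongly, that something like it underlies div, and I am not claiming this is what any actual Intel implementation does. Note how the quotient bits shift in at the low end while whatever survives in the high half at the end is the remainder:

    #include <stdint.h>
    #include <stdio.h>

    /* Textbook unsigned restoring division: 16-bit dividend, 8-bit divisor.
       Precondition: (dividend >> 8) < divisor, i.e., the quotient fits in
       8 bits -- the same condition whose violation raises #DE on x86.     */
    static void restoring_div(uint16_t dividend, uint8_t divisor,
                              uint8_t *quotient, uint8_t *remainder)
    {
        uint32_t acc = dividend;          /* high byte = the "padding" space */
        for (int i = 0; i < 8; i++) {
            acc <<= 1;                    /* shift the whole pair left       */
            if ((acc >> 8) >= divisor) {  /* trial-subtract in the high half */
                acc -= (uint32_t)divisor << 8;
                acc |= 1;                 /* quotient bit enters at low end  */
            }
        }
        *quotient  = (uint8_t)acc;        /* low half holds the quotient     */
        *remainder = (uint8_t)(acc >> 8); /* high half holds the remainder   */
    }

    int main(void) {
        uint8_t q, r;
        restoring_div(65, 3, &q, &r);
        printf("65 / 3: q = %u, r = %u\n", q, r);   /* q = 21, r = 2 */
        return 0;
    }

If that sketch is even roughly right, it would explain the remainder's location in the unsigned case, but I still do not see how the two's-complement negative case works.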
Thank you!