
I am currently working on a school project, and one of my tasks is to implement a 16-bit by 16-bit 2's complement integer divider as a digital logic circuit (in other words, one 16-bit input divided by another 16-bit input). The outputs are straightforward: a quotient Q and a remainder R. Special cases like division by zero are handled with preset conditions.

My primary issue is that the only ways I can see to implement this are long division or repeated subtraction, and repeated subtraction is far too slow (up to 65,535 subtractions in the worst case). Even then, I'm not sure how to implement long division without creating a messy circuit; a small model of what I mean is below. I'm open to suggestions if there is no other way.
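To make the idea concrete, here is a small C model of the compare/subtract-and-shift loop I have in mind (the names and the divide-by-zero preset are my own choices). Each loop iteration would correspond to one clock cycle, or one unrolled combinational stage, in hardware:

```c
#include <stdint.h>

/* Restoring long division: one compare/subtract and a one-bit shift
 * per quotient bit. Models a circuit with a remainder register, a
 * quotient register, and a 17-bit subtractor/comparator. */
void divide_u16(uint16_t n, uint16_t d, uint16_t *q, uint16_t *r)
{
    if (d == 0) {                 /* divide-by-zero preset (my choice) */
        *q = 0xFFFF;
        *r = n;
        return;
    }
    uint32_t rem = 0;             /* 17 bits suffice in hardware */
    uint16_t quo = 0;
    for (int i = 15; i >= 0; i--) {
        rem = (rem << 1) | ((n >> i) & 1);  /* shift in next dividend bit */
        if (rem >= d) {                     /* compare ...               */
            rem -= d;                       /* ... subtract if it fits   */
            quo |= (uint16_t)1 << i;        /* set this quotient bit     */
        }
    }
    *q = quo;
    *r = (uint16_t)rem;
}
```

The hardware cost seems small (one subtractor, two registers, a counter), but it takes 16 cycles per division, one per quotient bit.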

Because of this, I have looked into other division algorithms such as Newton-Raphson division, but I'm not sure those algorithms can be implemented as a logic circuit (I don't really understand how to; my rough fixed-point attempt is below). So I was wondering whether there are any speed-friendly division algorithms for doing this.
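For reference, this is the fixed-point model of Newton-Raphson I tried to work out; the number formats, the 48/17 initial-estimate constants, and the iteration count are my own guesses, so treat it as a sketch rather than a verified design. The point is that it uses only shifts, subtractions, and multiplications, all of which exist as logic blocks:

```c
#include <stdint.h>

/* Newton-Raphson reciprocal sketch (assumes d != 0).
 * Normalize d so that D = dn/2^16 lies in [0.5, 1), keep X ~ 1/D in
 * Q2.30 fixed point, refine with X <- X*(2 - D*X), then scale by n.
 * Each iteration roughly doubles the number of correct bits. */
uint16_t nr_div_u16(uint16_t n, uint16_t d)
{
    int shift = 0;
    uint32_t dn = d;
    while (dn < 0x8000u) { dn <<= 1; shift++; }   /* normalize divisor */

    /* Initial estimate X0 = 48/17 - (32/17)*D, accurate to about 1/17. */
    uint64_t x = (48ull << 30) / 17 - ((((32ull << 30) / 17) * dn) >> 16);

    for (int i = 0; i < 3; i++) {
        uint64_t dx = (dn * x) >> 16;             /* D*X in Q2.30 */
        x = (x * ((2ull << 30) - dx)) >> 30;      /* X*(2 - D*X)  */
    }

    /* n/d = n * (1/D) * 2^(shift-16); X is Q2.30, hence the shift by
     * 46-shift. Truncation can leave the result off by one, so finish
     * with a compare/subtract correction. */
    uint16_t q = (uint16_t)(((uint64_t)n * x) >> (46 - shift));
    while ((uint32_t)(q + 1) * d <= n) q++;
    while ((uint32_t)q * d > n) q--;
    return q;
}
```

In hardware this would trade the 16-cycle subtract/shift loop for a few passes through a fairly wide multiplier, which is the usual reason multiplicative dividers only pay off when a fast multiplier already exists.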

  • [Intel is famous for their implementation](https://en.wikipedia.org/wiki/Pentium_FDIV_bug) of a [fast division algorithm](https://en.wikipedia.org/wiki/Division_algorithm#SRT_division) in the Pentium family of microprocessors. IIRC, the algorithm allows division to proceed two bits at a time, instead of the normal one bit at a time for simple long division. So if this is a school project, I would stick with the simple compare/subtract followed by a one bit shift of the divisor. – user3386109 May 31 '20 at 08:08
  • Forget about 2's complement; do an unsigned division instead and handle the sign later (it's just negation + inc/dec; a sketch of such a wrapper follows after these comments). What you can do is build the 16-bit division from 8-bit or 4-bit divisions and use a LUT for those ... see [division by half-bitwidth arithmetics](https://stackoverflow.com/a/19381045/2521214) – Spektre May 31 '20 at 09:33
  • Also, Newton-Raphson is doable in circuitry ... – Spektre May 06 '21 at 10:41
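To make Spektre's sign-handling suggestion concrete, here is a minimal sketch (my own naming and conventions): it strips the signs, runs the unsigned `divide_u16` core modeled in the question, and negates the results, following the usual convention that the quotient truncates toward zero and the remainder takes the dividend's sign.

```c
#include <stdint.h>

void divide_u16(uint16_t n, uint16_t d, uint16_t *q, uint16_t *r);

/* Signed division on top of an unsigned core: negate inputs to make
 * them positive, divide, then negate the outputs as needed. */
void divide_s16(int16_t n, int16_t d, int16_t *q, int16_t *r)
{
    int32_t sn = n, sd = d;       /* widen so negating -32768 is safe */
    uint16_t un = (uint16_t)(sn < 0 ? -sn : sn);
    uint16_t ud = (uint16_t)(sd < 0 ? -sd : sd);
    uint16_t uq, ur;
    divide_u16(un, ud, &uq, &ur);
    /* Quotient is negative iff the operand signs differ; the remainder
     * follows the dividend. The one unrepresentable case, -32768 / -1,
     * overflows here just as it does in real 16-bit hardware. */
    *q = (int16_t)(((sn < 0) != (sd < 0)) ? -(int32_t)uq : (int32_t)uq);
    *r = (int16_t)(sn < 0 ? -(int32_t)ur : (int32_t)ur);
}
```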

0 Answers