First off, I presume that by "big decimal" you mean that you are representing a rational where the denominator is restricted to be any power of ten.
You'll want to think hard about what you want the output of the division of two decimals to be. Decimals are closed over addition, subtraction and multiplication but not closed over division. That is, any two decimals multiplied together produces a third: (7 / 10) * (9 / 100) gives you 63 / 1000, which is another decimal. But divide those two decimals and you get a rational that does not have a power of ten in the denominator.
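If you want to see that concretely, here is a quick check using Python's fractions module, purely to illustrate the closure point (your big decimal type will of course have its own representation):

```python
from fractions import Fraction

# Multiplying two decimals keeps the denominator a power of ten...
print(Fraction(7, 10) * Fraction(9, 100))   # 63/1000 -- still a decimal
# ...but dividing them generally does not.
print(Fraction(7, 10) / Fraction(9, 100))   # 70/9 -- not a decimal
```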
To answer the question you actually asked: just as multiplication can be built out of addition in a loop, division can be built out of subtraction in a loop. To divide 23 by 7, say (there is a code sketch after this walkthrough):
- Is 7 < 23? Yes.
- Subtract 7 from 23 to get 16.
- Is 7 < 16? Yes.
- Subtract 7 from 16 to get 9.
- Is 7 < 9? Yes.
- Subtract 7 from 9 to get 2.
- Is 7 < 2? No. We have our first digit. We did three subtractions, so the first digit is 3.
- Multiply 2 by 10 to get 20.
- Is 7 < 20? Yes.
- Subtract 7 from 20 to get 13.
- Is 7 < 13? Yes.
- Subtract 7 from 13 to get 6.
- Is 7 < 6? No. We did two subtractions and a multiplication by 10, so the next digit is 2.
- Multiply 6 by 10 to get 60.
- Is 7 < 60? Yes...
- ...
- We did eight subtractions, so the next digit is 8...
- ... and so on
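Here is that walkthrough as a sketch in Python; the function and parameter names are my own, and plain integers stand in for whatever digit storage your big decimal uses:

```python
def divide_by_repeated_subtraction(numerator, divisor, digits):
    """Quotient of numerator / divisor as a string with the given number of
    digits after the decimal point, using only comparison, subtraction and
    multiplication by ten."""
    if divisor <= 0 or numerator < 0:
        raise ValueError("this sketch assumes a positive divisor and a non-negative numerator")

    # Integer part: subtract the divisor until what is left is smaller than it.
    count, remainder = 0, numerator
    while remainder >= divisor:          # >= so that exact multiples come out right
        remainder -= divisor
        count += 1
    result = str(count) + "."

    # Each fractional digit: multiply the remainder by ten, then subtract again.
    for _ in range(digits):
        remainder *= 10
        count = 0
        while remainder >= divisor:
            remainder -= divisor
            count += 1
        result += str(count)
    return result

print(divide_by_repeated_subtraction(23, 7, 6))   # 3.285714
```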
Do you know of a faster algorithm for this purpose?
Sure, there are lots of faster algorithms for division. Here's one: Goldschmidt's Algorithm.
First off, I hope that it is clear that if you are trying to compute X / D then you can first compute 1 / D and then multiply that by X. Moreover, let us assume without loss of generality that D is strictly between 0 and 1.
What if it isn't? If D is negative, invert both it and X; if D is zero, give an error; if D is 1 then the answer is X; if D is greater than 1 then divide it and X both by 10, which should be easy for you since your system is decimal. Keep applying those rules until you have a D between zero and one. (As an additional optimization: The algorithm is slowest when D is very small, so if D is less than, say, 0.1, multiply X and D by ten until D is greater than or equal to 0.1.)
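A sketch of that normalization step, with the same caveat that the names are mine and Python floats are only standing in for your decimal type (where dividing or multiplying by ten is just a digit shift):

```python
def normalize(x, d):
    """Scale x and d so that 0.1 <= d < 1, using only negation and shifts
    by powers of ten. (The d == 1 shortcut described in the text is simply
    folded into the general loop here.)"""
    if d == 0:
        raise ZeroDivisionError("cannot divide by zero")
    if d < 0:
        x, d = -x, -d            # make the divisor positive
    while d >= 1:
        x, d = x / 10, d / 10    # shift both right one decimal place
    while d < 0.1:
        x, d = x * 10, d * 10    # shift both left one decimal place
    return x, d

print(normalize(23.0, 7.0))      # (2.3, 0.7): now compute 1 / 0.7 and multiply by 2.3
```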
OK, so our problem is that we have a number D between zero and one and we wish to compute 1 / D. Probably the easiest thing to do is to work an example. Suppose we are trying to compute 1 / 0.7. The correct answer is 1.42857142857...
Start by subtracting 0.7 from 2 to get 1.3. Now multiply both parts of the fraction by 1.3:
(1 / 0.7) * (1.3 / 1.3) = 1.3 / 0.91
Great. We have now computed 1 / 0.7 to one digit of accuracy.
Now do it again. Subtract 0.91 from 2 to get 1.09. Multiply both parts of the fraction by 1.09:
(1.3 / 0.91) * (1.09 / 1.09) = 1.417 / 0.9919
Great, now we have two correct digits. Now do it again. Subtract 0.9919 from 2 to get 1.0081. Multiply top and bottom by 1.0081:
(1.417 / 0.9919) * (1.0081 / 1.0081) = 1.4284777 / 0.99993439
Hey, now we have four correct digits. See how that goes? Every step of the way the denominator gets much closer to 1, and therefore the numerator gets closer to 1 / 0.7.
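Here is the worked example as a sketch in Python, again with floats standing in for your decimal type; in a real big-decimal implementation every multiplication below would be your own decimal multiply, rounded to your working precision:

```python
def goldschmidt_reciprocal(d, iterations):
    """Approximate 1 / d for 0.1 <= d < 1 using only subtraction and
    multiplication: repeatedly scale numerator and denominator by (2 - d)."""
    n = 1.0                 # numerator; converges to 1 / d
    for _ in range(iterations):
        factor = 2 - d      # the "2 minus the denominator" correction factor
        n *= factor         # 1.3, 1.417, 1.4284777, ...
        d *= factor         # 0.91, 0.9919, 0.99993439, ... converges to 1
    return n

# Five iterations are plenty for d = 0.7; a d close to 0.1 needs a few more.
print(goldschmidt_reciprocal(0.7, 5))         # ~1.4285714285714286
print(goldschmidt_reciprocal(0.7, 5) * 2.3)   # ~3.2857142857..., i.e. 23 / 7
```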
This converges much more quickly than the subtraction method.
Do you see why it works?
Since D is between 0 and 1, there is a number E such that D = 1 - E, and E is also between 0 and 1.
When we multiply D by (2 - D), we are multiplying (1 - E) by (1 + E), which gives 1 - E².
Since 0 < E < 1, clearly E² is smaller than E and also between 0 and 1, which means that 1 - E² is closer to 1. In fact it is a lot closer to 1. By repeating this process multiple times we get close to 1 very quickly. Effectively what we're doing here is roughly doubling the number of correct digits on each step. Obviously that is a lot better than the subtraction method, which gives one additional digit on each step.
Keep on doing that until you have the accuracy you desire. Since you're roughly doubling the number of accurate digits on each step, you should be able to get to an acceptable degree of accuracy pretty quickly. Since we already arranged that D is greater than or equal to 0.1 to start with, E is never larger than 0.9; repeatedly squaring 0.9 gets you down to a very small number pretty quickly.
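If you want to see that doubling concretely, here is a tiny, purely illustrative check that squares the worst-case error E = 0.9 over and over:

```python
e = 0.9                # worst-case error term E = 1 - D when D = 0.1
for step in range(1, 9):
    e = e * e          # each Goldschmidt step squares the error term
    print(step, e)     # 0.81, 0.6561, ..., down to roughly 2e-12 by step 8
```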