
I got asked this question in my last interview and expect it again. The interview was for a firmware engineer position, programming embedded systems. The microprocessors and microcontrollers available for these applications are usually not very powerful, and the simpler ones have no capability for floating-point calculations (no hardware multiplier or divider present inside).

So what are the possible answers to this question, then, and what (if anything) does fixed-point arithmetic have to do with it?

Is it possible to use the Newton-Raphson method to do this calculation at all? How?

quantum231
  • See also [this question](http://stackoverflow.com/questions/1100090/looking-for-an-efficient-integer-square-root-algorithm-for-arm-thumb2). – Craig McQueen Sep 04 '12 at 05:20

3 Answers


As ever with an interview question such as this, the smart response is to ask counter-questions. Smart questions intended to clarify and disambiguate the question show that you can think rather than simply spout received wisdom or dogma, and they also narrow the scope of the question so that your answer is more likely to be applicable. In this case there is perhaps a distinction between an MCU without floating-point hardware and a software implementation that does not use floating point. If the latter, then certainly fixed point is relevant, but in many applications an integer square root may be sufficient - you might ask about that, though isqrt(2) == 1 seems unlikely to be adequate in this case. The former does not preclude the use of floating point at all.

Normally the issue in embedded systems when using floating point is the lack of a floating-point unit (FPU), which causes floating-point operations implemented in software to be far slower and less deterministic. The lack of an FPU, or even of a hardware integer multiply or divide, does not preclude the use of floating point; it simply means that such operations are far slower and require greater code space.

On most systems, even without hardware floating-point support, the standard library math functions will still be provided via software floating point and can be used normally - even on 8-bit systems - though you should not expect mega-FLOPS performance. There are occasions when the performance hit, or the possibly non-deterministic nature of the library implementation, precludes its use; in such cases there are a number of algorithms that offer faster or more deterministic behaviour.

If the values to be rooted are large and an integer result is precise enough, then a purely integer solution will be fastest. For the general case there are a number of solutions, among which Newton-Raphson is one, but it is not likely to be optimal - it is rather the bubble sort of square-root algorithms: used because it is easy to teach and understand, not because of its performance.
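For illustration, the classic bit-by-bit (shift-and-subtract) integer square root needs only shifts, additions and comparisons - a minimal sketch (the function name is mine, and a production version would want further testing):

```c
#include <stdint.h>

/* Bit-by-bit integer square root: returns floor(sqrt(x)) using only
   shifts, additions and comparisons - no multiply, divide or float. */
uint16_t isqrt32(uint32_t x)
{
    uint32_t result = 0;
    uint32_t bit = (uint32_t)1 << 30;   /* highest power of four in 32 bits */

    while (bit > x)                     /* align with the input's magnitude */
        bit >>= 2;

    while (bit != 0)
    {
        if (x >= result + bit)
        {
            x -= result + bit;
            result = (result >> 1) + bit;
        }
        else
        {
            result >>= 1;
        }
        bit >>= 2;
    }
    return (uint16_t)result;
}
```

The run time is bounded by sixteen passes of the main loop regardless of the input, which addresses the determinism point above as well as speed.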

Using fixed point is a possibility, but since it is not an intrinsic data type, the code can become less easy to write and debug. I use a library written by Anthony Williams; it is written in C++ and defines a fixed class. The function and operator overloading capabilities of C++ mean that most floating-point code can be ported simply by replacing float or double with fixed. On an ARM processor, the fixed class performs about five times faster than software floating point. However, Anthony's sqrt algorithm can be improved upon, as I have described here, with an implementation based on the article The Neglected Art of Fixed Point Arithmetic - the code in that article is in C, so it may be more widely applicable (where C++ is either not available or not practical - although that is a different argument!).
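For readers new to the idea, fixed point simply stores a scaled integer. A minimal Q16.16 sketch (the format and the names here are illustrative, not Anthony's library):

```c
#include <stdint.h>

/* Q16.16 fixed point: the int32_t holds value * 65536, so the top
   16 bits are the integer part and the bottom 16 the fraction. */
typedef int32_t q16_16;

#define Q16_ONE ((q16_16)1 << 16)   /* 1.0 in Q16.16 */

/* Addition and subtraction are plain integer operations; multiplication
   needs a wider intermediate, then a shift to drop the extra fraction
   bits. Rounding and overflow handling are omitted for clarity. */
static inline q16_16 q16_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * b) >> 16);
}
```

For example, q16_mul(3 * Q16_ONE / 2, Q16_ONE / 2) yields 3 * Q16_ONE / 4, i.e. 1.5 × 0.5 = 0.75.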

Jack Crenshaw devotes a whole chapter to the sqrt() function in his book Math Toolkit for Real-Time Programming, where he starts out with a naive Newton-Raphson implementation and gradually refines it. He also presents an integer solution, though interestingly not a fixed-point one. Some of what Jack includes in the book has already appeared in his journal articles; for example, his treatment of the integer square root.

Either way, I might answer the question as follows:

I would evaluate the standard library software floating-point performance, precision and effect on code size and only if I found it to be inadequate for the application requirements would I consider an optimised solution using an established algorithm and possibly fixed-point arithmetic.

Note my use of the term "an established algorithm"; it usefully elides the fact that I may not know or recall the names of any particular algorithm. What I am really saying is that I do not know which algorithm would be appropriate, but I am not foolish enough to reinvent the wheel, since I am unlikely to come up with something better than what is already available, and through careful research and evaluation I would achieve the desired result if it is at all feasible. If an interviewee came up with that answer and asked smart questions beforehand, I would find it more than acceptable. Of course, the interviewer may not be as smart as you and may have a particular answer in mind that he believes to be the "correct" one; you may not want to work in an organisation with that kind of dogmatic attitude. An interview is a two-way process - you are interviewing the organisation to see whether you want to give them the benefit of your services.

Clifford

Yes, you can use Newton-Raphson to calculate a square root. Typically it is done something like this:

Let x be the number. We will first approximate 1/sqrt(x), then multiply that by x to produce x/sqrt(x), which is sqrt(x).

Given x, create a crude estimate for 1/sqrt(x). Often, this is done with a small lookup table, but any reasonable estimate will suffice (including 1). Call the initial estimate e_0.

Repeat this step as many times as desired: from the current estimate e_i, create a new estimate e_{i+1} = e_i · (3 − x · e_i^2) / 2. Usually, the number of steps needed can be determined in advance from numerical analysis, the quality of the initial estimate of 1/sqrt(x), and the accuracy required by the application.

Finally, return the last e_i · x as the estimate of sqrt(x).

This sequence converges very quickly and is commonly used to calculate the square root. You can start with the Wikipedia page on Newton’s method to understand how this sequence for the reciprocal of the square root is derived. Also note that it does not use division, which is often a slow operation.
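To make the steps concrete, here is a minimal sketch in C (using double purely to demonstrate the convergence on a host machine; on a target without an FPU the same recurrence would be evaluated in fixed point, as noted below):

```c
#include <stdio.h>

/* Newton-Raphson for 1/sqrt(x): e_{i+1} = e_i * (3 - x*e_i*e_i) / 2.
   Note there is no division by a variable - only multiplication. */
static double nr_sqrt(double x, double e0, int iterations)
{
    double e = e0;                  /* crude estimate of 1/sqrt(x) */
    for (int i = 0; i < iterations; ++i)
        e = 0.5 * e * (3.0 - x * e * e);
    return e * x;                   /* x / sqrt(x) == sqrt(x) */
}

int main(void)
{
    /* e0 = 1 is a poor estimate but still converges for x near 1;
       a bad e0 (x*e0*e0 >= 3) fails to converge, hence the lookup table. */
    printf("%f\n", nr_sqrt(2.0, 1.0, 6));   /* about 1.414214 */
    return 0;
}
```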

You would use fixed point for these calculations because you need more precision than integers provide but floating point is not available.
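A hedged sketch of how that might look in Q16.16 fixed point (the format, the crude initial estimate, and the fixed iteration count are my own illustrative choices, not something from a particular library):

```c
#include <stdint.h>

typedef int32_t q16_16;                      /* value = raw / 65536.0 */

#define Q16_ONE   ((q16_16)1 << 16)
#define Q16_THREE (3 * Q16_ONE)

static q16_16 q16_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * b) >> 16);
}

/* sqrt(x) via the reciprocal square root: iterate
   e <- e * (3 - x*e*e) / 2, then sqrt(x) = x * e.
   The initial estimate is 1.0 scaled by powers of two until x*e*e
   lies in [0.25, 1), which makes the fixed iteration count below
   sufficient; a lookup-table estimate would need fewer iterations.
   Rounding, and the extreme bottom of the range (x within a few raw
   units of zero, where 1/sqrt(x) approaches the Q16.16 limit), are
   not handled in this sketch. */
q16_16 q16_sqrt(q16_16 x)
{
    if (x <= 0)
        return 0;

    q16_16 e = Q16_ONE;
    while (q16_mul(x, q16_mul(e, e)) >= Q16_ONE)
        e >>= 1;                             /* scale down for large x */
    while (q16_mul(x, q16_mul(e, e)) < Q16_ONE / 4)
        e <<= 1;                             /* scale up for small x */

    for (int i = 0; i < 6; ++i)              /* enough for Q16.16 precision */
        e = q16_mul(e, Q16_THREE - q16_mul(x, q16_mul(e, e))) >> 1;

    return q16_mul(x, e);                    /* x / sqrt(x) == sqrt(x) */
}
```

For example, q16_sqrt(2 * Q16_ONE) yields approximately 92680, i.e. about 1.41419 - within a few least-significant bits of sqrt(2), illustrating the small precision loss discussed in the comments below.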

Essentially, the interviewer was probing to see whether you had experience with numerical algorithms on embedded processors.

Eric Postpischil
  • I just wished to know why we need floating point at all. We have fixed point as well, and to me it seems sufficient. I never knew about fixed point, since I had only heard about floating-point precision numbers. It was not until I did some research on the topic of the interview question that I realized there is such a thing as fixed-point numbers as well. – quantum231 Sep 02 '12 at 03:38
    @quantum231: Why we have floating point is a topic for a new question. Fixed point arithmetic may be useful in a limited domain, where the scale of the numbers involved remains within a narrow range and is known in advance. Floating point is hugely more flexible, serving a large range of numbers from very small to very large, and relieves the programmer of the burden of tracking the range and of guarding against overflow (until exceptionally large). It is useful in physics (whether science or game physics), engineering, and many other domains. – Eric Postpischil Sep 02 '12 at 10:30
  • @quantum231: Fixed point is indeed often adequate, and faster where no hardware support for floating point is available. However, while it is entirely adequate for sqrt(2), fixed-point sqrt() algorithms necessarily lose a few bits of precision, so for roots of very small values you can end up with significant errors. Floating point better handles a wider range of values; fixed point is often a trade-off between range and precision. On the other hand, floating point can equally generate large errors for expressions involving both very large and very small values. – Clifford Sep 02 '12 at 11:56
  • @Clifford - I've implemented the Babylonian method of computing square roots (integer only, with conversion to float) that is accurate to 6 decimal places with a low number of iterations. Significant errors would require taking the square root of really small numbers. For most algorithms out there, you make a trade-off between speed and precision. The ability to use floating point natively just lets you choose between one implementation or another. There is no inherent improvement in results simply from having an FPU. – iheanyi Jun 30 '14 at 22:15

The ENIAC computed square roots using only addition and subtraction. The method is based on the fact that the sum of the first n odd integers is n squared.

To calculate the square root of m, find the smallest integer n such that the sum of the first n odd integers exceeds m. This can be done by subtracting consecutive odd integers from m until a negative result is obtained. If n is the smallest integer such that m - (1 + 3 + 5 + ... + (2n - 1)) < 0, then (n - 1)^2 <= m < n^2, and hence (n - 1) <= sqrt(m) < n. Letting a be the n-th odd integer 2n - 1 and substituting n = (a + 1)/2 into this double inequality gives

a - 1 <= 2*sqrt(m) < a + 1

or

(a - 1)/2 <= sqrt(m) < (a + 1)/2

Source: http://www4.wittenberg.edu/academics/mathcomp/bjsdir/ENIACSquareRoot.htm

However, the bounds this yields are only tight in a relative sense for large numbers; for m = 2 it merely tells you that 1 <= sqrt(2) < 2.
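For illustration, a minimal C sketch of the method (the function name is mine); it returns floor(sqrt(m)), which is the lower bound (a - 1)/2 above:

```c
#include <stdint.h>

/* ENIAC-style square root using only addition and subtraction:
   subtract the successive odd integers 1, 3, 5, ... from m while
   possible. The loop exits with odd == a == 2n - 1, the first odd
   integer that no longer fits, so odd / 2 == (a - 1) / 2, which is
   floor(sqrt(m)). */
uint32_t eniac_isqrt(uint32_t m)
{
    uint32_t odd = 1;
    while (m >= odd)
    {
        m   -= odd;
        odd += 2;
    }
    return odd / 2;
}
```

For m = 2 this returns 1, as the caveat above suggests; note also that the run time grows as sqrt(m), so it is more of a historical curiosity than a practical algorithm on a modern MCU.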

Felix
  • Hmm, I did expect this question to be here already. I have not seen it asked specifically for embedded systems yet. There are techniques for finding the sqrt of a number, but not a question addressing the points I have presented. – quantum231 Sep 01 '12 at 21:55