
Is there any published O(log b) algorithm to determine if a b-bit number is square of an integer?

(I apologize if this question is beyond the scope of this site, and I would happily retract it if so.)

Update: I realize that my question as posed is unreasonable, so let me amend it: I am asking for any algorithm that runs in a subpolynomial number of operations in b — not necessarily O(log^k b) for a constant k — and has "reasonable" space complexity. Operations are defined in the usual sense: whatever is reasonable for the task at hand (e.g. add, negate, xor, and, or, etc.).

Post script: Now I realize that my question is nonsense: the cost of computing the floor square root of an n-bit number is no more than a constant factor times the cost of multiplying two n-bit numbers.
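To make that postscript concrete, here is a minimal sketch (an illustration, not from the question itself) of a perfect-square test built on Newton's integer square root. Each iteration costs one division, so on machine-word inputs the whole test stays within a small constant factor of a handful of multiplications/divisions; the function names are my own.

```cpp
#include <cassert>

// Integer square root by Newton's method: returns floor(sqrt(n)).
unsigned long long isqrt_newton(unsigned long long n){
    if(n < 2) return n;
    unsigned long long x = n, y = (x + 1) / 2;
    while(y < x){                 // iterate until the sequence stops decreasing
        x = y;
        y = (x + n / x) / 2;      // Newton/Babylonian step
    }
    return x;
}

// n is a perfect square iff its floor square root squares back to n.
bool is_perfect_square(unsigned long long n){
    unsigned long long s = isqrt_newton(n);
    return s * s == n;
}
```

For bignums the same idea applies, but each division then costs as much as a multiplication, which is exactly the point of the postscript.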

  • You might find this link useful: http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Digit-by-digit_calculation – Maroun Nov 10 '13 at 07:04
  • Wouldn't a binary search find it in log n time? – asimes Nov 10 '13 at 07:08
  • 3
    cstheory.stackexchange.com would probably give you better answers. – Millie Smith Nov 10 '13 at 07:10
  • @Maroun thank you for your comment. I am aware of that and similar algorithms. –  Nov 10 '13 at 07:10
  • @asimes- Since we're talking about complexity in terms of the number of bits in the input, the cost of multiplying the intermediary terms would be little-omega(log b). – templatetypedef Nov 10 '13 at 07:11
  • There are some awesome methods around, but none seem to have Big-O analysis calculated, such as: http://stackoverflow.com/questions/295579/fastest-way-to-determine-if-an-integers-square-root-is-an-integer and the math is beyond me. – BrianH Nov 10 '13 at 07:11
  • You may also benefit from this page: http://math.stackexchange.com/questions/41337/efficient-way-to-determine-if-a-number-is-perfect-square – bdean20 Nov 10 '13 at 07:37
  • 1
    @BrianDHall well you cannot search for fastest ways to compute sqrt ans hope that complexity is better. Mostly its the opposite because optimized code is not about lowering complexity but to use fast operations for example if you use a superb approx polynom then complexity is O(1) but the O(log N) approach can be 1000x times faster (and usually even faster) – Spektre Nov 10 '13 at 09:59
  • 1
    Also useful: http://math.stackexchange.com/q/131330, which appears to be a duplicate of the question you posted on cstheory. – Jim Mischel Nov 11 '13 at 02:59
  • 1
    The way I'm reading it: It's impossible. It takes O(b) work to read a b-bit number. And you need to read all b-bits to be able to confirm that the number is a perfect square. (Though disproving a b-bit number from being a square is sometimes possible without looking at all the bits.) – Mysticial Nov 11 '13 at 20:16
  • But if you had infinite parallel processors, it's possible to do it in `O(log(b)^2)` time. Since the critical path of the best known square root algorithm is about `O(log(b)^2)` steps. – Mysticial Nov 11 '13 at 20:21

1 Answer

  1. if b is small enough, then using a sqrt table the complexity is O(1)
  2. if not, then using bit-by-bit approximation the complexity is O(b/2) = O(log n)
  3. using a perfect-square table the complexity is also O(b/2) = O(log n)

but with much faster operations (just compares and bit operations): a b-bit number can be the perfect square of at most a (b/2)-bit number, so the table has 2^(b/2) entries of b-bit numbers, and an approximate index search (similar to binary search) takes b/2 steps.
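Point 1 can be sketched like this: for a small b (here b = 8) a precomputed table gives an O(1) perfect-square test. The names and the table size are illustrative choices of mine, not from the answer.

```cpp
#include <cassert>

// sqrt_tab[x] = floor(sqrt(x)) for all 8-bit values of x.
static unsigned char sqrt_tab[256];

void init_sqrt_tab(){
    unsigned s = 0;
    for(unsigned i = 0; i < 256; i++){
        while((s + 1) * (s + 1) <= i) s++;  // floor sqrt is non-decreasing,
        sqrt_tab[i] = (unsigned char)s;     // so s never needs to reset
    }
}

// O(1) test for inputs that fit in 8 bits.
bool is_square8(unsigned x){
    unsigned s = sqrt_tab[x];
    return s * s == x;
}
```

The table grows exponentially in b, which is why this only works "if b is small enough".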

  4. some improvement can be done with approximation

Create an approximation function y = approx_sqrt(x) and compute y. Now you can just check the values in < y-c , y+c >, which has runtime ~T(2c), where c is a constant dependent on the approximation accuracy (1,2,3,...). Most approximations are off more on bigger values, so you can make c = log(b) and your complexity is suddenly O(log b) = O(log log n), which is what you are searching for, I think.

Here is my tested implementation of the above:

    //---------------------------------------------------------------------------
    // note: y*y can overflow int for large x; use a wider type for big inputs
    int  is_square(int x,int &cnt)      // return sqrt(x) if perfect or 0, cnt = num of cycles ~ runtime
        {
        int y,yy;
        // y=aprox_sqrt(x)
        for (y=x,yy=x;yy;y>>=1,yy>>=2); // halves the bit count
        if (y) y=(y+(x/y))>>1;          // babylonian approximation
        if (y) y=(y+(x/y))>>1;
        if (y) y=(y+(x/y))>>1;
        // check estimated y and near values
        cnt=1;
        yy=y*y; if (yy==x) return y;
        if (yy<x) for (;;)
            {
            cnt++;
            y++;
            yy=y*y;
            if (yy==x) return y;
            if (yy> x) return 0;
            }
        else for (;;)
            {
            cnt++;
            y--;
            yy=y*y;
            if (yy==x) return y;
            if (yy< x) return 0;
            }
        }
    //---------------------------------------------------------------------------

I added cnt for you so you can test the complexity yourself. My approximation needs a good start value, so I use halving of the bit count, which is obviously O(log b); but for bignums and float values the exponent/bit count is already known, so this reduces to a single bit/word/base/exponent shift, O(1). BTW that is the IEEE float magic behind most of the fast approximations for sqrt or log functions.
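The "IEEE float magic" mentioned above can be sketched as follows (a crude illustration of the general trick, assuming 32-bit IEEE-754 floats; the constant re-adds half the exponent bias so that halving the bit pattern roughly halves the exponent):

```cpp
#include <cassert>
#include <cstring>
#include <cmath>

// Crude sqrt seed via IEEE-754 bit manipulation: halving the exponent of x
// approximately halves log2(x), i.e. approximates sqrt(x).
float approx_sqrt(float x){
    unsigned i;
    std::memcpy(&i, &x, sizeof i);   // read the raw bit pattern
    i = (i >> 1) + (127u << 22);     // halve exponent, restore half the bias
    float y;
    std::memcpy(&y, &i, sizeof y);
    return y;                        // rough seed; refine with Newton steps
}
```

The result is exact for powers of 4 and within a few percent elsewhere, which is good enough as a start value for the Babylonian iterations above.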

  5. My measurements are better than my first estimates (even for the non-precise Babylonian approximation):

     /*----------------
     |          N | T |
     ------------------
     | 1000000000 | 6 |
     |  100000000 | 4 |
     |   10000000 | 2 |
     |    1000000 | 2 |
     ----------------*/
    

where N is the loop bound it was tested on, and T is the max cnt value over tested numbers in < 0, N >. For a different approximation (better suited to your needs) you can look here.

So my answer to your question is YES, there does exist an algorithm faster than O(log n) for determining if n is a perfect square (for instance mine above, which also computes the sqrt), but if you are also counting the complexity of the basic operations then I am afraid the answer is NO, because even bit operations are O(log n) on bignums!

BTW the binary-search sqrt can also be done without multiplication.
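One way to sketch that is the classic digit-by-digit (shift-and-add) square root, which uses only shifts, adds and compares — no multiplications. This is the standard textbook method, not the answer's own code; a bonus is that the remainder left at the end is zero exactly when the input was a perfect square, so no squaring is needed to verify.

```cpp
#include <cassert>

// Digit-by-digit integer square root using only shifts, adds and compares.
// Returns floor(sqrt(x)) and sets rem = x - floor(sqrt(x))^2, so rem == 0
// exactly when x is a perfect square.
unsigned isqrt_bits(unsigned x, unsigned &rem){
    unsigned res = 0, bit = 1u << 30;   // highest power of 4 that fits
    while(bit > x) bit >>= 2;           // align with the top bits of x
    while(bit){
        if(x >= res + bit){             // try setting the next result bit
            x   -= res + bit;
            res  = (res >> 1) + bit;
        }else{
            res >>= 1;
        }
        bit >>= 2;                      // move to the next digit pair
    }
    rem = x;
    return res;
}
```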

Spektre
  • I am aware of that (hence the clause "if b is small enough"); I also sometimes work with big numbers (12000++ bits) ... I know my O notation is not correct, but I use it anyway because it shows the difference between "same complexity" algorithms in cycle terms: O(n) and O(2n) should both be O(n), but then I cannot see which is "faster"... Bullets 2, 4 and 5 apply to bignums; the table algorithms (bullets 1, 3) do not – Spektre Nov 10 '13 at 12:41
  • hehe, don't worry about that ... my experience here is that when I post some equation that is not globally searchable (like on wiki or IEEE), then the downvotes or "not working" comments come (even if everything is tested / proved / coded / measured or well known in a different part of the world...) – Spektre Nov 12 '13 at 07:54
  • PS. if you are going to code up that paper of yours, it may be worth looking at http://stackoverflow.com/q/18465326/2521214 and http://stackoverflow.com/q/18577076/2521214 for FFT/NTT based multiplication – Spektre Nov 12 '13 at 08:01
  • 1
    I get what you are trying to say (and agree) but in this case c is not really a constant but a function of n !!! ... strongly dependend on the aproximation algorithm used if it wasnt then the complexity of example would be O(1) !!! – Spektre Nov 13 '13 at 12:33
  • PS. c is the max difference between the approximation and the real value of sqrt, plus some safety constant. For the Babylonian approximation it is ~ log(b) ... so log(log(n)) – Spektre Nov 13 '13 at 12:44