- if `b` is small enough, then with use of a `sqrt` table the complexity is `O(1)`
- if not, then with use of bit approximation the complexity is `O(b/2) = O(log n)`
- with use of a perfect-square table the complexity is also `O(b/2) = O(log n)`, but with much faster operations (just compares and bit operations). A `b`-bit number can be a perfect square of at most a `b/2`-bit number, so the table has `2^(b/2)` entries of `b`-bit numbers, and the approximate index search (similar to binary search) takes `b/2` steps.
- some improvement can be done with an approximation: create an approximation function `y = approx_sqrt(x)` and compute `y`. Now you just check the values from `<y-c, y+c>`, which has runtime `~T(2c)`, where `c` is a constant dependent on the approximation accuracy (`1,2,3,...`). Most approximations are off on the bigger values, so you can make `c = log(b)` and your complexity is suddenly `O(log b) = O(log log n)`, which is what you are searching for, I think.
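The perfect-square-table idea above can be sketched as follows; this is only a minimal illustration assuming 32-bit inputs, and the name `is_square_table` is mine, not part of the tested code below:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of the perfect-square-table approach for b = 32:
// every 32-bit perfect square is the square of a 16-bit number,
// so the table has 2^(b/2) = 65536 entries.
bool is_square_table(uint32_t x)
{
    static const std::vector<uint64_t> squares = []{
        std::vector<uint64_t> t(65536);
        for (uint64_t i = 0; i < 65536; i++) t[i] = i * i; // sorted by construction
        return t;
    }();
    // binary search = the b/2-step approximate index search
    return std::binary_search(squares.begin(), squares.end(), (uint64_t)x);
}
```

The table costs `2^(b/2)` entries (64 K here), which is why this only pays off for small `b`.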
Here is my tested implementation of the above:
//---------------------------------------------------------------------------
int is_square(int x,int &cnt) // returns sqrt(x) if x is a perfect square, else 0; cnt = number of cycles ~ runtime
    {
    int y,yy;
    // y = approx_sqrt(x)
    for (y=x,yy=x;yy;y>>=1,yy>>=2); // halve the bit count
    if (y) y=(y+(x/y))>>1;          // Babylonian approximation
    if (y) y=(y+(x/y))>>1;
    if (y) y=(y+(x/y))>>1;
    // check estimated y and nearby values
    cnt=1;
    yy=y*y; if (yy==x) return y;
    if (yy<x) for (;;)              // estimate too low -> search upward
        {
        cnt++; y++;
        yy=y*y;
        if (yy==x) return y;
        if (yy> x) return 0;
        }
    else for (;;)                   // estimate too high -> search downward
        {
        cnt++; y--;
        yy=y*y;
        if (yy==x) return y;
        if (yy< x) return 0;
        }
    }
//---------------------------------------------------------------------------
I added `cnt` for you so you can test the complexity yourself. My approximation needs a good start value, so I use halving of the bit count, which is obviously O(log b). But for bignums and `float` values the exponent/bit count is already known, so it converts to just a single bit/word/base/exponent shift, O(1). BTW, that is the IEEE float magic done by most of the approximations for `sqrt` or `log` functions.
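That exponent trick can be sketched with `frexp`/`ldexp` (a hedged illustration with my own names, not the tested code above): halving the exponent yields an O(1) starting guess within a constant factor of the true root, after which a few Babylonian/Newton steps finish the job.

```cpp
#include <cmath>

// sqrt(m * 2^e) ~ sqrt(m) * 2^(e/2), so halving the exponent gives
// a starting guess within a small constant factor of the true root.
double approx_sqrt_seed(double x)
{
    int e;
    double m = std::frexp(x, &e);   // x = m * 2^e with m in [0.5, 1)
    return std::ldexp(m, e / 2);    // halve the exponent: O(1) seed
}

double babylonian_sqrt(double x)
{
    if (x <= 0) return 0;
    double y = approx_sqrt_seed(x);
    for (int i = 0; i < 6; i++)     // quadratic convergence from a
        y = 0.5 * (y + x / y);      // constant-factor seed
    return y;
}
```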
My measurements are better than my first estimates (even for the non-precise Babylonian approximation):
/*------------------
|     N      | T |
--------------------
| 1000000000 | 6 |
|  100000000 | 4 |
|   10000000 | 2 |
|    1000000 | 2 |
------------------*/
where `N` is the upper bound of the test loop and `T` is the max `cnt` value over all tested numbers in `<0, N>`. For different approximations (better suited for your needs) you can look here.
So my answer to your question is YES, there does exist an algorithm faster than O(log n) for determining whether `n` is a perfect square (for instance mine above, which also computes the `sqrt`). But if you are also counting the complexity of the basic operations, then I am afraid the answer is NO, because even bit operations are O(log n) on bignums!!!
BTW, the binary-search `sqrt` can also be done without multiplication.
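For example, the classic shift-and-subtract integer `sqrt` uses only compares, additions and shifts; a sketch (my own names, assuming 32-bit unsigned input), with the nice property that the remainder it leaves behind directly answers the perfect-square question:

```cpp
#include <cstdint>

// Multiplication-free integer sqrt: classic shift-and-subtract,
// one result bit per step, using only compares, adds and shifts.
// On return, rem = n - res*res, so rem == 0 <=> n is a perfect square.
uint32_t isqrt_noprod(uint32_t n, uint32_t &rem)
{
    uint32_t res = 0;
    uint32_t bit = 1u << 30;        // highest power of 4 <= 2^30
    while (bit > n) bit >>= 2;
    while (bit)
    {
        if (n >= res + bit) { n -= res + bit; res = (res >> 1) + bit; }
        else res >>= 1;
        bit >>= 2;
    }
    rem = n;                        // leftover: n - floor(sqrt(n))^2
    return res;                     // floor(sqrt(n))
}

uint32_t isqrt_noprod(uint32_t n)   // convenience overload
{
    uint32_t rem;
    return isqrt_noprod(n, rem);
}

bool is_square_noprod(uint32_t n)   // perfect-square test, no multiplies
{
    uint32_t rem;
    isqrt_noprod(n, rem);
    return rem == 0;
}
```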